Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1651269460 - Will randomize all specs
Will run 5773 specs

Running in parallel across 10 nodes

Apr 29 21:57:42.155: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 21:57:42.157: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 29 21:57:42.183: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 29 21:57:42.244: INFO: The status of Pod cmk-init-discover-node1-gxlbt is Succeeded, skipping waiting
Apr 29 21:57:42.244: INFO: The status of Pod cmk-init-discover-node2-csdn7 is Succeeded, skipping waiting
Apr 29 21:57:42.244: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 29 21:57:42.244: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Apr 29 21:57:42.244: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 29 21:57:42.261: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Apr 29 21:57:42.261: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Apr 29 21:57:42.261: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Apr 29 21:57:42.261: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Apr 29 21:57:42.261: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Apr 29 21:57:42.261: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Apr 29 21:57:42.261: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Apr 29 21:57:42.261: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 29 21:57:42.261: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Apr 29 21:57:42.261: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Apr 29 21:57:42.261: INFO: e2e test version: v1.21.9
Apr 29 21:57:42.263: INFO: kube-apiserver version: v1.21.1
Apr 29 21:57:42.263: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 21:57:42.269: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
Apr 29 21:57:42.264: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 21:57:42.285: INFO: Cluster IP family: ipv4
SS
------------------------------
Apr 29 21:57:42.268: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 21:57:42.287: INFO: Cluster IP family: ipv4
SSS
------------------------------
Apr 29 21:57:42.271: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 21:57:42.293: INFO: Cluster IP family: ipv4
SSSSSSSSSSSS
------------------------------
Apr 29 21:57:42.279: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 21:57:42.301: INFO: Cluster IP family: ipv4
S
------------------------------
Apr 29 21:57:42.283: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 21:57:42.303: INFO: Cluster IP family: ipv4
SS
------------------------------
Apr 29 21:57:42.284: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 21:57:42.304: INFO: Cluster IP family: ipv4
SSS
------------------------------
Apr 29 21:57:42.285: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 21:57:42.307: INFO: Cluster IP family: ipv4
SSSSSSSS
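
The readiness gates logged above (schedulable nodes, ready kube-system pods, fully scheduled daemonsets) can be spot-checked by hand; a rough kubectl sketch, with the kubeconfig path taken from the log and the timeout values illustrative:

export KUBECONFIG=/root/.kube/config
# Nodes: none should report SchedulingDisabled.
kubectl get nodes
# kube-system pods: all Running and Ready.
kubectl get pods -n kube-system
# Daemonsets fully scheduled, e.g. two of the ones the suite checks above.
kubectl -n kube-system rollout status daemonset/kube-proxy --timeout=5m
kubectl -n kube-system rollout status daemonset/kube-flannel --timeout=5m
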
------------------------------
Apr 29 21:57:42.289: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 21:57:42.311: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
Apr 29 21:57:42.294: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 21:57:42.315: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 21:57:42.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W0429 21:57:42.376659 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 21:57:42.376: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 21:57:42.378: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should test the lifecycle of an Endpoint [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating an Endpoint
STEP: waiting for available Endpoint
STEP: listing all Endpoints
STEP: updating the Endpoint
STEP: fetching the Endpoint
STEP: patching the Endpoint
STEP: fetching the Endpoint
STEP: deleting the Endpoint by Collection
STEP: waiting for Endpoint deletion
STEP: fetching the Endpoint
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 21:57:42.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3174" for this suite.
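
The Endpoint lifecycle stepped through above maps onto plain kubectl verbs; a sketch, with the object name and addresses made up for illustration:

# create
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Endpoints
metadata:
  name: example-endpoint
subsets:
  - addresses:
      - ip: 10.0.0.10
    ports:
      - port: 80
EOF
# list
kubectl get endpoints
# patch
kubectl patch endpoints example-endpoint -p '{"metadata":{"labels":{"test":"patched"}}}'
# delete by collection (label selector)
kubectl delete endpoints -l test=patched
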
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":1,"skipped":18,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 21:57:42.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
W0429 21:57:42.321633 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 21:57:42.321: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 21:57:42.323: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
Apr 29 21:57:43.402: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true)
Apr 29 21:57:43.469: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 21:57:43.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8436" for this suite.
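
The deleteOptions.PropagationPolicy=Orphan behavior verified above is what kubectl exposes as --cascade=orphan (kubectl 1.20+); a sketch with an illustrative deployment name:

kubectl create deployment demo --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
kubectl delete deployment demo --cascade=orphan
# The ReplicaSet survives the delete; only the Deployment is gone.
kubectl get replicasets -l app=demo
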
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 21:57:42.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Apr 29 21:57:42.534: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a45113af-316f-423b-b15d-0e1fa75f7956" in namespace "projected-2570" to be "Succeeded or Failed"
Apr 29 21:57:42.536: INFO: Pod "downwardapi-volume-a45113af-316f-423b-b15d-0e1fa75f7956": Phase="Pending", Reason="", readiness=false. Elapsed: 1.854277ms
Apr 29 21:57:44.541: INFO: Pod "downwardapi-volume-a45113af-316f-423b-b15d-0e1fa75f7956": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006992097s
Apr 29 21:57:46.546: INFO: Pod "downwardapi-volume-a45113af-316f-423b-b15d-0e1fa75f7956": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012508045s
STEP: Saw pod success
Apr 29 21:57:46.547: INFO: Pod "downwardapi-volume-a45113af-316f-423b-b15d-0e1fa75f7956" satisfied condition "Succeeded or Failed"
Apr 29 21:57:46.549: INFO: Trying to get logs from node node1 pod downwardapi-volume-a45113af-316f-423b-b15d-0e1fa75f7956 container client-container:
STEP: delete the pod
Apr 29 21:57:46.570: INFO: Waiting for pod downwardapi-volume-a45113af-316f-423b-b15d-0e1fa75f7956 to disappear
Apr 29 21:57:46.572: INFO: Pod downwardapi-volume-a45113af-316f-423b-b15d-0e1fa75f7956 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 21:57:46.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2570" for this suite.
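
For reference, a minimal pod of the shape this test builds: a downward API volume projecting only metadata.name. Object name, image, and paths below are illustrative, not the test's actual spec:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
spec:
  restartPolicy: Never
  containers:
    - name: client-container
      image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
      command: ["sh", "-c", "cat /etc/podinfo/podname"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
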
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":53,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 21:57:46.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run the lifecycle of PodTemplates [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 21:57:46.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-9515" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":3,"skipped":112,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 21:57:42.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
W0429 21:57:42.348320 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 21:57:42.348: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 21:57:42.350: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should add annotations for pods in rc [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating Agnhost RC
Apr 29 21:57:42.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5441 create -f -'
Apr 29 21:57:42.732: INFO: stderr: ""
Apr 29 21:57:42.732: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Apr 29 21:57:43.735: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 21:57:43.735: INFO: Found 0 / 1
Apr 29 21:57:44.737: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 21:57:44.737: INFO: Found 0 / 1
Apr 29 21:57:45.736: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 21:57:45.736: INFO: Found 0 / 1
Apr 29 21:57:46.737: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 21:57:46.737: INFO: Found 0 / 1
Apr 29 21:57:47.736: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 21:57:47.736: INFO: Found 1 / 1
Apr 29 21:57:47.736: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Apr 29 21:57:47.738: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 21:57:47.738: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 29 21:57:47.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5441 patch pod agnhost-primary-jrv7j -p {"metadata":{"annotations":{"x":"y"}}}'
Apr 29 21:57:47.919: INFO: stderr: ""
Apr 29 21:57:47.919: INFO: stdout: "pod/agnhost-primary-jrv7j patched\n"
STEP: checking annotations
Apr 29 21:57:47.922: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 21:57:47.922: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 21:57:47.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5441" for this suite.
• [SLOW TEST:5.604 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460
    should add annotations for pods in rc [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":1,"skipped":24,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 21:57:42.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
W0429 21:57:42.395490 35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 21:57:42.395: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 21:57:42.397: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[BeforeEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1514
[It] should create a pod from an image when restart is Never [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
Apr 29 21:57:42.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4645 run e2e-test-httpd-pod --restart=Never --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1'
Apr 29 21:57:42.664: INFO: stderr: ""
Apr 29 21:57:42.664: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1518
Apr 29 21:57:42.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4645 delete pods e2e-test-httpd-pod'
Apr 29 21:57:49.905: INFO: stderr: ""
Apr 29 21:57:49.905: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 21:57:49.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4645" for this suite.
• [SLOW TEST:7.540 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1511
    should create a pod from an image when restart is Never [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":1,"skipped":36,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 21:57:42.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
W0429 21:57:42.322767 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 21:57:42.323: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 21:57:42.325: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Apr 29 21:57:42.343: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4d4cc542-617f-45f5-91bf-c594dec1a8ff" in namespace "downward-api-5091" to be "Succeeded or Failed"
Apr 29 21:57:42.349: INFO: Pod "downwardapi-volume-4d4cc542-617f-45f5-91bf-c594dec1a8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 5.267516ms
Apr 29 21:57:44.353: INFO: Pod "downwardapi-volume-4d4cc542-617f-45f5-91bf-c594dec1a8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009341262s
Apr 29 21:57:46.357: INFO: Pod "downwardapi-volume-4d4cc542-617f-45f5-91bf-c594dec1a8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013095242s
Apr 29 21:57:48.360: INFO: Pod "downwardapi-volume-4d4cc542-617f-45f5-91bf-c594dec1a8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01686258s
Apr 29 21:57:50.364: INFO: Pod "downwardapi-volume-4d4cc542-617f-45f5-91bf-c594dec1a8ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.021012171s
STEP: Saw pod success
Apr 29 21:57:50.364: INFO: Pod "downwardapi-volume-4d4cc542-617f-45f5-91bf-c594dec1a8ff" satisfied condition "Succeeded or Failed"
Apr 29 21:57:50.367: INFO: Trying to get logs from node node2 pod downwardapi-volume-4d4cc542-617f-45f5-91bf-c594dec1a8ff container client-container:
STEP: delete the pod
Apr 29 21:57:50.388: INFO: Waiting for pod downwardapi-volume-4d4cc542-617f-45f5-91bf-c594dec1a8ff to disappear
Apr 29 21:57:50.389: INFO: Pod downwardapi-volume-4d4cc542-617f-45f5-91bf-c594dec1a8ff no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 21:57:50.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5091" for this suite.
• [SLOW TEST:8.093 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
SS
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 21:57:47.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override arguments
Apr 29 21:57:48.020: INFO: Waiting up to 5m0s for pod "client-containers-2c63e7c5-b5e9-4576-8c14-017ad2078e11" in namespace "containers-3815" to be "Succeeded or Failed"
Apr 29 21:57:48.022: INFO: Pod "client-containers-2c63e7c5-b5e9-4576-8c14-017ad2078e11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114902ms
Apr 29 21:57:50.026: INFO: Pod "client-containers-2c63e7c5-b5e9-4576-8c14-017ad2078e11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006008808s
Apr 29 21:57:52.029: INFO: Pod "client-containers-2c63e7c5-b5e9-4576-8c14-017ad2078e11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008534139s
STEP: Saw pod success
Apr 29 21:57:52.029: INFO: Pod "client-containers-2c63e7c5-b5e9-4576-8c14-017ad2078e11" satisfied condition "Succeeded or Failed"
Apr 29 21:57:52.031: INFO: Trying to get logs from node node1 pod client-containers-2c63e7c5-b5e9-4576-8c14-017ad2078e11 container agnhost-container:
STEP: delete the pod
Apr 29 21:57:52.081: INFO: Waiting for pod client-containers-2c63e7c5-b5e9-4576-8c14-017ad2078e11 to disappear
Apr 29 21:57:52.083: INFO: Pod client-containers-2c63e7c5-b5e9-4576-8c14-017ad2078e11 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 21:57:52.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3815" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":47,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 21:57:46.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should create a PodDisruptionBudget [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pdb
STEP: Waiting for the pdb to be processed
STEP: updating the pdb
STEP: Waiting for the pdb to be processed
STEP: patching the pdb
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be deleted
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 21:57:52.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-1457" for this suite.
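
The PDB create/update/patch/delete cycle above can be reproduced with kubectl; the name, selector, and thresholds below are illustrative:

kubectl create poddisruptionbudget demo-pdb --selector=app=demo --min-available=1
kubectl get pdb demo-pdb -o yaml          # wait until status is processed
kubectl patch pdb demo-pdb -p '{"spec":{"minAvailable":2}}'
kubectl delete pdb demo-pdb
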
• [SLOW TEST:6.070 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should create a PodDisruptionBudget [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":4,"skipped":161,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 21:57:42.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
W0429 21:57:42.315786 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 21:57:42.316: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 21:57:42.319: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test env composition
Apr 29 21:57:42.334: INFO: Waiting up to 5m0s for pod "var-expansion-e366d779-f4c1-4613-9f81-26d34e13fd6d" in namespace "var-expansion-4462" to be "Succeeded or Failed"
Apr 29 21:57:42.336: INFO: Pod "var-expansion-e366d779-f4c1-4613-9f81-26d34e13fd6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065216ms
Apr 29 21:57:44.342: INFO: Pod "var-expansion-e366d779-f4c1-4613-9f81-26d34e13fd6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007372906s
Apr 29 21:57:46.345: INFO: Pod "var-expansion-e366d779-f4c1-4613-9f81-26d34e13fd6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011016126s
Apr 29 21:57:48.350: INFO: Pod "var-expansion-e366d779-f4c1-4613-9f81-26d34e13fd6d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01589371s
Apr 29 21:57:50.355: INFO: Pod "var-expansion-e366d779-f4c1-4613-9f81-26d34e13fd6d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020825843s
Apr 29 21:57:52.357: INFO: Pod "var-expansion-e366d779-f4c1-4613-9f81-26d34e13fd6d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023153371s
Apr 29 21:57:54.361: INFO: Pod "var-expansion-e366d779-f4c1-4613-9f81-26d34e13fd6d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.027040459s
Apr 29 21:57:56.365: INFO: Pod "var-expansion-e366d779-f4c1-4613-9f81-26d34e13fd6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.030781823s
STEP: Saw pod success
Apr 29 21:57:56.365: INFO: Pod "var-expansion-e366d779-f4c1-4613-9f81-26d34e13fd6d" satisfied condition "Succeeded or Failed"
Apr 29 21:57:56.368: INFO: Trying to get logs from node node2 pod var-expansion-e366d779-f4c1-4613-9f81-26d34e13fd6d container dapi-container:
STEP: delete the pod
Apr 29 21:57:56.380: INFO: Waiting for pod var-expansion-e366d779-f4c1-4613-9f81-26d34e13fd6d to disappear
Apr 29 21:57:56.382: INFO: Pod var-expansion-e366d779-f4c1-4613-9f81-26d34e13fd6d no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 21:57:56.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4462" for this suite.
• [SLOW TEST:14.098 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 21:57:52.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating secret secrets-8515/secret-test-84d9a74e-8e01-4984-8d58-248364752c30
STEP: Creating a pod to test consume secrets
Apr 29 21:57:53.020: INFO: Waiting up to 5m0s for pod "pod-configmaps-6eb63311-95b4-4e25-a7b4-f9ee9d7513bc" in namespace "secrets-8515" to be "Succeeded or Failed"
Apr 29 21:57:53.023: INFO: Pod "pod-configmaps-6eb63311-95b4-4e25-a7b4-f9ee9d7513bc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.831384ms
Apr 29 21:57:55.027: INFO: Pod "pod-configmaps-6eb63311-95b4-4e25-a7b4-f9ee9d7513bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007139938s
Apr 29 21:57:57.031: INFO: Pod "pod-configmaps-6eb63311-95b4-4e25-a7b4-f9ee9d7513bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011534868s
STEP: Saw pod success
Apr 29 21:57:57.031: INFO: Pod "pod-configmaps-6eb63311-95b4-4e25-a7b4-f9ee9d7513bc" satisfied condition "Succeeded or Failed"
Apr 29 21:57:57.033: INFO: Trying to get logs from node node1 pod pod-configmaps-6eb63311-95b4-4e25-a7b4-f9ee9d7513bc container env-test:
STEP: delete the pod
Apr 29 21:57:57.046: INFO: Waiting for pod pod-configmaps-6eb63311-95b4-4e25-a7b4-f9ee9d7513bc to disappear
Apr 29 21:57:57.047: INFO: Pod pod-configmaps-6eb63311-95b4-4e25-a7b4-f9ee9d7513bc no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 21:57:57.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8515" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":179,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 21:57:52.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating the pod
Apr 29 21:57:52.187: INFO: The status of Pod annotationupdate591f9851-e9b8-4e24-914e-171f19fcc16a is Pending, waiting for it to be Running (with Ready = true)
Apr 29 21:57:54.192: INFO: The status of Pod annotationupdate591f9851-e9b8-4e24-914e-171f19fcc16a is Pending, waiting for it to be Running (with Ready = true)
Apr 29 21:57:56.191: INFO: The status of Pod annotationupdate591f9851-e9b8-4e24-914e-171f19fcc16a is Running (Ready = true)
Apr 29 21:57:56.918: INFO: Successfully updated pod "annotationupdate591f9851-e9b8-4e24-914e-171f19fcc16a"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 21:58:00.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4433" for this suite.
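
The update path exercised above (mutate pod metadata, wait for the downward API volume to refresh) can be poked at manually; a sketch with an illustrative pod name and mount path (kubelet refreshes the projected file on its sync period, hence the polling):

kubectl annotate pod annotationupdate-demo foo=bar --overwrite
# Poll the projected file until kubelet refreshes it.
kubectl exec annotationupdate-demo -- sh -c 'for i in 1 2 3 4 5; do cat /etc/podinfo/annotations; sleep 5; done'
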
• [SLOW TEST:8.800 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":72,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 21:57:56.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Apr 29 21:57:56.437: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 21:58:02.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3069" for this suite.
• [SLOW TEST:6.567 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 21:58:01.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Apr 29 21:58:01.038: INFO: Waiting up to 5m0s for pod "security-context-5967fd27-16ab-47c6-9464-ba9ad7d10b8b" in namespace "security-context-7944" to be "Succeeded or Failed"
Apr 29 21:58:01.040: INFO: Pod "security-context-5967fd27-16ab-47c6-9464-ba9ad7d10b8b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.993193ms
Apr 29 21:58:03.044: INFO: Pod "security-context-5967fd27-16ab-47c6-9464-ba9ad7d10b8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005841995s
Apr 29 21:58:05.048: INFO: Pod "security-context-5967fd27-16ab-47c6-9464-ba9ad7d10b8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010190616s
STEP: Saw pod success
Apr 29 21:58:05.048: INFO: Pod "security-context-5967fd27-16ab-47c6-9464-ba9ad7d10b8b" satisfied condition "Succeeded or Failed"
Apr 29 21:58:05.050: INFO: Trying to get logs from node node1 pod security-context-5967fd27-16ab-47c6-9464-ba9ad7d10b8b container test-container:
STEP: delete the pod
Apr 29 21:58:05.064: INFO: Waiting for pod security-context-5967fd27-16ab-47c6-9464-ba9ad7d10b8b to disappear
Apr 29 21:58:05.066: INFO: Pod security-context-5967fd27-16ab-47c6-9464-ba9ad7d10b8b no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 21:58:05.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-7944" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":94,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 21:57:42.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
W0429 21:57:42.341723 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 21:57:42.342: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 21:57:42.343: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52
STEP: create the container to handle the HTTPGet hook request.
Apr 29 21:57:42.375: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Apr 29 21:57:44.378: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Apr 29 21:57:46.379: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Apr 29 21:57:48.378: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Apr 29 21:57:50.379: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the pod with lifecycle hook
Apr 29 21:57:50.393: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true)
Apr 29 21:57:52.395: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true)
Apr 29 21:57:54.397: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true)
Apr 29 21:57:56.396: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true)
Apr 29 21:57:58.397: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true)
Apr 29 21:58:00.398: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true)
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 29 21:58:00.412: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 29 21:58:00.415: INFO: Pod pod-with-poststart-http-hook still exists
Apr 29 21:58:02.415: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 29 21:58:02.418: INFO: Pod pod-with-poststart-http-hook still exists
Apr 29 21:58:04.417: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 29 21:58:04.419: INFO: Pod pod-with-poststart-http-hook still exists
Apr 29 21:58:06.415: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 29 21:58:06.418: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 21:58:06.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5153" for this suite.
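
A pod of the shape this test creates, with a postStart httpGet hook pointing at the handler pod started above; the hook target address, image, and port below are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
    - name: main
      image: k8s.gcr.io/e2e-test-images/nginx:1.14-1
      lifecycle:
        postStart:
          httpGet:
            host: 10.244.1.10    # IP of the handler pod (illustrative)
            path: /echo?msg=poststart
            port: 8080
EOF
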
• [SLOW TEST:24.108 seconds]
[sig-node] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 21:57:49.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3170.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3170.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3170.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3170.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3170.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3170.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 29 21:58:08.049: INFO: DNS probes using dns-3170/dns-test-9a023be3-a297-4165-aa37-5d3d11f5787f succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 21:58:08.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3170" for this suite.
• [SLOW TEST:18.091 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":65,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 21:57:42.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
W0429 21:57:42.332439 38 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 21:57:42.332: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 21:57:42.334: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8190.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8190.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8190.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8190.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8190.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8190.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 29 21:58:08.396: INFO: DNS probes using dns-8190/dns-test-5aa67b27-6972-49f4-bab1-cf7235cb4b9b succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 21:58:08.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8190" for this suite.
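
A lighter-weight manual version of these DNS probes can be run from a throwaway pod; the image below is illustrative (any image shipping getent and dig works):

kubectl run dns-check --rm -it --restart=Never \
  --image=k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4 -- \
  sh -c 'getent hosts kubernetes.default.svc.cluster.local && cat /etc/hosts'
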
• [SLOW TEST:26.101 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] server version
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 21:58:08.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename server-version
STEP: Waiting for a default service account to be provisioned in namespace
[It] should find the server version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Request ServerVersion
STEP: Confirm major version
Apr 29 21:58:08.453: INFO: Major version: 1
STEP: Confirm minor version
Apr 29 21:58:08.453: INFO: cleanMinorVersion: 21
Apr 29 21:58:08.453: INFO: Minor version: 21
[AfterEach] [sig-api-machinery] server version
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 21:58:08.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-3517" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}
SSS
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 21:58:02.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-projected-all-test-volume-943c2cdc-8fc7-4a15-b034-68efd75d9f33
STEP: Creating secret with name secret-projected-all-test-volume-256990f6-9099-44e2-a74b-1b7a134d5729
STEP: Creating a pod to test Check all projections for projected volume plugin
Apr 29 21:58:03.022: INFO: Waiting up to 5m0s for pod "projected-volume-243e38b2-391c-42a5-ae84-f6d27b74cd84" in namespace "projected-534" to be "Succeeded or Failed"
Apr 29 21:58:03.024: INFO: Pod "projected-volume-243e38b2-391c-42a5-ae84-f6d27b74cd84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213404ms
Apr 29 21:58:05.028: INFO: Pod "projected-volume-243e38b2-391c-42a5-ae84-f6d27b74cd84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00537573s
Apr 29 21:58:07.032: INFO: Pod "projected-volume-243e38b2-391c-42a5-ae84-f6d27b74cd84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010340664s
Apr 29 21:58:09.036: INFO: Pod "projected-volume-243e38b2-391c-42a5-ae84-f6d27b74cd84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014274442s
STEP: Saw pod success
Apr 29 21:58:09.036: INFO: Pod "projected-volume-243e38b2-391c-42a5-ae84-f6d27b74cd84" satisfied condition "Succeeded or Failed"
Apr 29 21:58:09.042: INFO: Trying to get logs from node node2 pod projected-volume-243e38b2-391c-42a5-ae84-f6d27b74cd84 container projected-all-volume-test:
STEP: delete the pod
Apr 29 21:58:09.107: INFO: Waiting for pod projected-volume-243e38b2-391c-42a5-ae84-f6d27b74cd84 to disappear
Apr 29 21:58:09.109: INFO: Pod projected-volume-243e38b2-391c-42a5-ae84-f6d27b74cd84 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 21:58:09.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-534" for this suite.
• [SLOW TEST:6.135 seconds]
[sig-storage] Projected combined
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":9,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 21:58:05.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sysctl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 21:58:09.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-3958" for this suite.
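
The shape of the pod this sysctl test creates, sketched with an illustrative name and image (kernel.shm_rmid_forced is set through the pod securityContext):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  restartPolicy: Never
  securityContext:
    sysctls:
      - name: kernel.shm_rmid_forced
        value: "1"
  containers:
    - name: main
      image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
      command: ["sh", "-c", "sysctl kernel.shm_rmid_forced"]
EOF
kubectl logs sysctl-demo    # expect: kernel.shm_rmid_forced = 1
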
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":5,"skipped":98,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:09.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 29 21:58:09.211: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1335 8d3ef0c8-1b72-4e07-8282-783be98a758f 32106 0 2022-04-29 21:58:09 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-04-29 21:58:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 29 21:58:09.211: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1335 8d3ef0c8-1b72-4e07-8282-783be98a758f 32107 0 2022-04-29 21:58:09 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-04-29 21:58:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:09.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1335" for this suite. 
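------------------------------
[Sketch] The watch test above starts watching ConfigMaps at the resourceVersion returned by the first update and then expects to see only the later MODIFIED and DELETED events, exactly the two events printed in the log. Assuming a clientset cs wired up as in the sysctl sketch earlier, the core call looks roughly like this (function and names are mine, not the suite's):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // watchFrom replays ConfigMap events that occurred after resourceVersion rv,
    // e.g. the ResourceVersion field of the object returned by an earlier Update.
    func watchFrom(ctx context.Context, cs kubernetes.Interface, ns, name, rv string) error {
        w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{
            FieldSelector:   "metadata.name=" + name, // only this object
            ResourceVersion: rv,                      // start after the first update
        })
        if err != nil {
            return err
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            // In the run above this printed MODIFIED then DELETED.
            fmt.Println(ev.Type)
        }
        return nil
    }

------------------------------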
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":6,"skipped":102,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:57:42.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W0429 21:57:42.345883 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Apr 29 21:57:42.346: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Apr 29 21:57:42.347: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 21:57:42.370: INFO: The status of Pod test-webserver-b74fae58-80ec-4aa7-af3d-dc8de04ac735 is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:57:44.374: INFO: The status of Pod test-webserver-b74fae58-80ec-4aa7-af3d-dc8de04ac735 is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:57:46.375: INFO: The status of Pod test-webserver-b74fae58-80ec-4aa7-af3d-dc8de04ac735 is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:57:48.374: INFO: The status of Pod test-webserver-b74fae58-80ec-4aa7-af3d-dc8de04ac735 is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:57:50.374: INFO: The status of Pod test-webserver-b74fae58-80ec-4aa7-af3d-dc8de04ac735 is Running (Ready = false) Apr 29 21:57:52.374: INFO: The status of Pod test-webserver-b74fae58-80ec-4aa7-af3d-dc8de04ac735 is Running (Ready = false) Apr 29 21:57:54.374: INFO: The status of Pod test-webserver-b74fae58-80ec-4aa7-af3d-dc8de04ac735 is Running (Ready = false) Apr 29 21:57:56.373: INFO: The status of Pod test-webserver-b74fae58-80ec-4aa7-af3d-dc8de04ac735 is Running (Ready = false) Apr 29 21:57:58.374: INFO: The status of Pod test-webserver-b74fae58-80ec-4aa7-af3d-dc8de04ac735 is Running (Ready = false) Apr 29 21:58:00.377: INFO: The status of Pod test-webserver-b74fae58-80ec-4aa7-af3d-dc8de04ac735 is Running (Ready = false) Apr 29 21:58:02.373: INFO: The status of Pod test-webserver-b74fae58-80ec-4aa7-af3d-dc8de04ac735 is Running (Ready = false) Apr 29 21:58:04.376: INFO: The status of Pod test-webserver-b74fae58-80ec-4aa7-af3d-dc8de04ac735 is Running (Ready = false) Apr 29 21:58:06.374: INFO: The status of Pod test-webserver-b74fae58-80ec-4aa7-af3d-dc8de04ac735 is Running (Ready = false) Apr 29 21:58:08.373: INFO: The status of Pod test-webserver-b74fae58-80ec-4aa7-af3d-dc8de04ac735 is Running (Ready = false) Apr 29 21:58:10.374: INFO: The status of Pod test-webserver-b74fae58-80ec-4aa7-af3d-dc8de04ac735 is Running (Ready = false) Apr 29 21:58:12.374: INFO: The status of Pod test-webserver-b74fae58-80ec-4aa7-af3d-dc8de04ac735 is Running (Ready = false) 
Apr 29 21:58:14.375: INFO: The status of Pod test-webserver-b74fae58-80ec-4aa7-af3d-dc8de04ac735 is Running (Ready = true) Apr 29 21:58:14.377: INFO: Container started at 2022-04-29 21:57:48 +0000 UTC, pod became ready at 2022-04-29 21:58:12 +0000 UTC [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:14.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-502" for this suite. • [SLOW TEST:32.059 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:09.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-94f48e56-542e-4880-babf-38579676c177 STEP: Creating a pod to test consume configMaps Apr 29 21:58:09.182: INFO: Waiting up to 5m0s for pod "pod-configmaps-c77ec3ee-da82-4997-a47d-88bbd93bc5dd" in namespace "configmap-9098" to be "Succeeded or Failed" Apr 29 21:58:09.186: INFO: Pod "pod-configmaps-c77ec3ee-da82-4997-a47d-88bbd93bc5dd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.754841ms Apr 29 21:58:11.191: INFO: Pod "pod-configmaps-c77ec3ee-da82-4997-a47d-88bbd93bc5dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00844665s Apr 29 21:58:13.194: INFO: Pod "pod-configmaps-c77ec3ee-da82-4997-a47d-88bbd93bc5dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011135976s Apr 29 21:58:15.198: INFO: Pod "pod-configmaps-c77ec3ee-da82-4997-a47d-88bbd93bc5dd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015069689s STEP: Saw pod success Apr 29 21:58:15.198: INFO: Pod "pod-configmaps-c77ec3ee-da82-4997-a47d-88bbd93bc5dd" satisfied condition "Succeeded or Failed" Apr 29 21:58:15.199: INFO: Trying to get logs from node node2 pod pod-configmaps-c77ec3ee-da82-4997-a47d-88bbd93bc5dd container agnhost-container: STEP: delete the pod Apr 29 21:58:15.213: INFO: Waiting for pod pod-configmaps-c77ec3ee-da82-4997-a47d-88bbd93bc5dd to disappear Apr 29 21:58:15.215: INFO: Pod pod-configmaps-c77ec3ee-da82-4997-a47d-88bbd93bc5dd no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:15.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9098" for this suite. • [SLOW TEST:6.076 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:08.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Updating PodDisruptionBudget status STEP: Waiting for all pods to be running Apr 29 21:58:10.159: INFO: running pods: 0 < 1 Apr 29 21:58:12.162: INFO: running pods: 0 < 1 Apr 29 21:58:14.163: INFO: running pods: 0 < 1 STEP: locating a running pod STEP: Waiting for the pdb to be processed STEP: Patching PodDisruptionBudget status STEP: Waiting for the pdb to be processed [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:16.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-6190" for this suite. 
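------------------------------
[Sketch] The DisruptionController case creates a PodDisruptionBudget, waits for the controller to process it, then exercises both update and patch against the status subresource. A rough sketch of the patch half (names illustrative; whatever status you write, the disruption controller will immediately recompute it, which is what the repeated "Waiting for the pdb to be processed" steps above are checking):

    package main

    import (
        "context"

        policyv1 "k8s.io/api/policy/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
    )

    func pdbStatusDemo(ctx context.Context, cs kubernetes.Interface, ns string) error {
        minAvail := intstr.FromInt(1)
        pdb := &policyv1.PodDisruptionBudget{
            ObjectMeta: metav1.ObjectMeta{Name: "demo-pdb"},
            Spec: policyv1.PodDisruptionBudgetSpec{
                MinAvailable: &minAvail,
                Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "demo"}},
            },
        }
        if _, err := cs.PolicyV1().PodDisruptionBudgets(ns).Create(ctx, pdb, metav1.CreateOptions{}); err != nil {
            return err
        }
        // Patch the status subresource; the controller reconciles it afterwards.
        _, err := cs.PolicyV1().PodDisruptionBudgets(ns).Patch(ctx, "demo-pdb",
            types.MergePatchType, []byte(`{"status":{"disruptionsAllowed":0}}`),
            metav1.PatchOptions{}, "status")
        return err
    }

------------------------------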
• [SLOW TEST:8.087 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":3,"skipped":85,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:08.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 21:58:08.500: INFO: The status of Pod server-envvars-0c79f5cc-3a12-40de-8d25-65f3c1a6f2b2 is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:58:10.503: INFO: The status of Pod server-envvars-0c79f5cc-3a12-40de-8d25-65f3c1a6f2b2 is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:58:12.504: INFO: The status of Pod server-envvars-0c79f5cc-3a12-40de-8d25-65f3c1a6f2b2 is Running (Ready = true) Apr 29 21:58:12.538: INFO: Waiting up to 5m0s for pod "client-envvars-70e58b8d-93d4-46f1-87bb-ada02894458a" in namespace "pods-8080" to be "Succeeded or Failed" Apr 29 21:58:12.540: INFO: Pod "client-envvars-70e58b8d-93d4-46f1-87bb-ada02894458a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032582ms Apr 29 21:58:14.543: INFO: Pod "client-envvars-70e58b8d-93d4-46f1-87bb-ada02894458a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005627244s Apr 29 21:58:16.547: INFO: Pod "client-envvars-70e58b8d-93d4-46f1-87bb-ada02894458a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008822651s STEP: Saw pod success Apr 29 21:58:16.547: INFO: Pod "client-envvars-70e58b8d-93d4-46f1-87bb-ada02894458a" satisfied condition "Succeeded or Failed" Apr 29 21:58:16.550: INFO: Trying to get logs from node node1 pod client-envvars-70e58b8d-93d4-46f1-87bb-ada02894458a container env3cont: STEP: delete the pod Apr 29 21:58:16.562: INFO: Waiting for pod client-envvars-70e58b8d-93d4-46f1-87bb-ada02894458a to disappear Apr 29 21:58:16.565: INFO: Pod client-envvars-70e58b8d-93d4-46f1-87bb-ada02894458a no longer exists [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:16.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8080" for this suite. 
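------------------------------
[Sketch] The Pods environment-variable case works because the kubelet injects <NAME>_SERVICE_HOST / <NAME>_SERVICE_PORT variables for every service that already exists when a container starts: the server pod and its service are created first, and only then the client pod, which just dumps its environment. Sketch under assumed names (demo-svc, env-client):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
    )

    func serviceEnvDemo(ctx context.Context, cs kubernetes.Interface, ns string) error {
        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "demo-svc"},
            Spec: corev1.ServiceSpec{
                Selector: map[string]string{"app": "server"},
                Ports:    []corev1.ServicePort{{Port: 8080, TargetPort: intstr.FromInt(8080)}},
            },
        }
        if _, err := cs.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
            return err
        }
        // A pod started *after* the service exists sees DEMO_SVC_SERVICE_HOST etc.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "env-client"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "env3cont",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "env | grep DEMO_SVC_"},
                }},
            },
        }
        _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
        return err
    }

------------------------------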
• [SLOW TEST:8.106 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":10,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:06.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 29 21:58:06.608: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9530 e7cd2107-03e3-445f-aa92-22e8270f7476 32003 0 2022-04-29 21:58:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-04-29 21:58:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 29 21:58:06.609: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9530 e7cd2107-03e3-445f-aa92-22e8270f7476 32004 0 2022-04-29 21:58:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-04-29 21:58:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 29 21:58:06.609: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9530 e7cd2107-03e3-445f-aa92-22e8270f7476 32005 0 2022-04-29 21:58:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-04-29 21:58:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 29 21:58:16.628: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9530 e7cd2107-03e3-445f-aa92-22e8270f7476 32373 0 2022-04-29 21:58:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-04-29 21:58:06 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 29 21:58:16.629: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9530 e7cd2107-03e3-445f-aa92-22e8270f7476 32374 0 2022-04-29 21:58:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-04-29 21:58:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 29 21:58:16.629: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9530 e7cd2107-03e3-445f-aa92-22e8270f7476 32376 0 2022-04-29 21:58:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-04-29 21:58:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:16.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9530" for this suite. • [SLOW TEST:10.060 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":2,"skipped":84,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:57:50.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Apr 29 21:57:50.426: INFO: >>> kubeConfig: /root/.kube/config Apr 29 21:57:58.547: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:19.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6532" for this suite. 
• [SLOW TEST:28.991 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:14.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 29 21:58:14.714: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 29 21:58:16.722: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866294, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866294, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866294, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866294, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 29 21:58:19.733: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:19.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3679" for this suite. STEP: Destroying namespace "webhook-3679-markers" for this suite. 
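------------------------------
[Sketch] The AdmissionWebhook case deploys a webhook server behind the e2e-test-webhook service seen in the log, then registers it for pod CREATEs; every pod admitted afterwards comes back mutated with the webhook's defaults. Registration reduces to one object; the webhook name, path, and CA bundle below are placeholders:

    package main

    import (
        "context"

        admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func registerMutatingWebhook(ctx context.Context, cs kubernetes.Interface, ns string, caPEM []byte) error {
        path := "/mutating-pods"
        sideEffects := admissionregistrationv1.SideEffectClassNone
        cfg := &admissionregistrationv1.MutatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "demo-mutating-webhook"},
            Webhooks: []admissionregistrationv1.MutatingWebhook{{
                Name: "pod-defaulter.example.com",
                ClientConfig: admissionregistrationv1.WebhookClientConfig{
                    Service:  &admissionregistrationv1.ServiceReference{Namespace: ns, Name: "e2e-test-webhook", Path: &path},
                    CABundle: caPEM, // CA that signed the webhook server's cert
                },
                Rules: []admissionregistrationv1.RuleWithOperations{{
                    Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
                    Rule: admissionregistrationv1.Rule{
                        APIGroups: []string{""}, APIVersions: []string{"v1"}, Resources: []string{"pods"},
                    },
                }},
                SideEffects:             &sideEffects,
                AdmissionReviewVersions: []string{"v1"},
            }},
        }
        _, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().Create(ctx, cfg, metav1.CreateOptions{})
        return err
    }

------------------------------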
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.411 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:16.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-9b858709-ffef-4e3f-9458-541a901a56dd STEP: Creating a pod to test consume secrets Apr 29 21:58:16.755: INFO: Waiting up to 5m0s for pod "pod-secrets-1ee88bfa-79ab-44cb-ae5a-861e90ebd442" in namespace "secrets-2329" to be "Succeeded or Failed" Apr 29 21:58:16.758: INFO: Pod "pod-secrets-1ee88bfa-79ab-44cb-ae5a-861e90ebd442": Phase="Pending", Reason="", readiness=false. Elapsed: 2.698727ms Apr 29 21:58:18.762: INFO: Pod "pod-secrets-1ee88bfa-79ab-44cb-ae5a-861e90ebd442": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00650903s Apr 29 21:58:20.765: INFO: Pod "pod-secrets-1ee88bfa-79ab-44cb-ae5a-861e90ebd442": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009452097s STEP: Saw pod success Apr 29 21:58:20.765: INFO: Pod "pod-secrets-1ee88bfa-79ab-44cb-ae5a-861e90ebd442" satisfied condition "Succeeded or Failed" Apr 29 21:58:20.767: INFO: Trying to get logs from node node2 pod pod-secrets-1ee88bfa-79ab-44cb-ae5a-861e90ebd442 container secret-volume-test: STEP: delete the pod Apr 29 21:58:20.814: INFO: Waiting for pod pod-secrets-1ee88bfa-79ab-44cb-ae5a-861e90ebd442 to disappear Apr 29 21:58:20.816: INFO: Pod pod-secrets-1ee88bfa-79ab-44cb-ae5a-861e90ebd442 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:20.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2329" for this suite. 
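------------------------------
[Sketch] The Secrets defaultMode case mounts a secret volume with an explicit file mode and has the container print the directory listing; success means the mode was honored on every key file. The pod shape, with illustrative names (the mode value is the kind of restrictive 0400-style mode the conformance test exercises):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func secretModePod(ctx context.Context, cs kubernetes.Interface, ns string) error {
        mode := int32(0400) // permission bits applied to every key in the volume
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{SecretName: "demo-secret", DefaultMode: &mode},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "secret-volume-test",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "ls -l /etc/secret-volume"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
                }},
            },
        }
        _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
        return err
    }

------------------------------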
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":123,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:16.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 29 21:58:16.244: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 29 21:58:21.247: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:22.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6877" for this suite. • [SLOW TEST:6.053 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":4,"skipped":93,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:16.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-7eda3fde-31b6-436b-9be6-780b3e1ee285 STEP: Creating a pod to test consume secrets Apr 29 21:58:16.654: INFO: Waiting up to 5m0s for pod "pod-secrets-dd1ab357-a305-48bf-ba5e-295db329e19e" in namespace "secrets-6731" to be "Succeeded or Failed" Apr 29 21:58:16.656: INFO: Pod "pod-secrets-dd1ab357-a305-48bf-ba5e-295db329e19e": Phase="Pending", Reason="", readiness=false. Elapsed: 1.735736ms Apr 29 21:58:18.659: INFO: Pod "pod-secrets-dd1ab357-a305-48bf-ba5e-295db329e19e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0042801s Apr 29 21:58:20.662: INFO: Pod "pod-secrets-dd1ab357-a305-48bf-ba5e-295db329e19e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.007674879s Apr 29 21:58:22.666: INFO: Pod "pod-secrets-dd1ab357-a305-48bf-ba5e-295db329e19e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011123675s Apr 29 21:58:24.669: INFO: Pod "pod-secrets-dd1ab357-a305-48bf-ba5e-295db329e19e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.014302945s STEP: Saw pod success Apr 29 21:58:24.669: INFO: Pod "pod-secrets-dd1ab357-a305-48bf-ba5e-295db329e19e" satisfied condition "Succeeded or Failed" Apr 29 21:58:24.671: INFO: Trying to get logs from node node1 pod pod-secrets-dd1ab357-a305-48bf-ba5e-295db329e19e container secret-volume-test: STEP: delete the pod Apr 29 21:58:24.684: INFO: Waiting for pod pod-secrets-dd1ab357-a305-48bf-ba5e-295db329e19e to disappear Apr 29 21:58:24.686: INFO: Pod pod-secrets-dd1ab357-a305-48bf-ba5e-295db329e19e no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:24.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6731" for this suite. • [SLOW TEST:8.073 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":31,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:19.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting a starting resourceVersion STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:24.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2214" for this suite. 
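------------------------------
[Sketch] The concurrent-watch case opens many watches starting from different resourceVersions while a background goroutine keeps mutating a ConfigMap, then asserts every watcher observes the same resourceVersion sequence. Reduced to the comparison step for two watches (the helper name is mine): call this twice with the same rv and the returned slices should be identical.

    package main

    import (
        "context"

        "k8s.io/apimachinery/pkg/api/meta"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // collectRVs reads n events from a watch started at rv and returns their
    // resourceVersions in arrival order.
    func collectRVs(ctx context.Context, cs kubernetes.Interface, ns, rv string, n int) ([]string, error) {
        w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{ResourceVersion: rv})
        if err != nil {
            return nil, err
        }
        defer w.Stop()
        var rvs []string
        for ev := range w.ResultChan() {
            obj, err := meta.Accessor(ev.Object)
            if err != nil {
                return nil, err
            }
            rvs = append(rvs, obj.GetResourceVersion())
            if len(rvs) == n {
                break
            }
        }
        return rvs, nil
    }

------------------------------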
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":3,"skipped":26,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:15.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replication controller my-hostname-basic-89a79d5f-1466-4354-9577-fb1573875243 Apr 29 21:58:15.289: INFO: Pod name my-hostname-basic-89a79d5f-1466-4354-9577-fb1573875243: Found 0 pods out of 1 Apr 29 21:58:20.294: INFO: Pod name my-hostname-basic-89a79d5f-1466-4354-9577-fb1573875243: Found 1 pods out of 1 Apr 29 21:58:20.294: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-89a79d5f-1466-4354-9577-fb1573875243" are running Apr 29 21:58:20.296: INFO: Pod "my-hostname-basic-89a79d5f-1466-4354-9577-fb1573875243-vxxrr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-29 21:58:15 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-29 21:58:17 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-29 21:58:17 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-29 21:58:15 +0000 UTC Reason: Message:}]) Apr 29 21:58:20.298: INFO: Trying to dial the pod Apr 29 21:58:25.308: INFO: Controller my-hostname-basic-89a79d5f-1466-4354-9577-fb1573875243: Got expected result from replica 1 [my-hostname-basic-89a79d5f-1466-4354-9577-fb1573875243-vxxrr]: "my-hostname-basic-89a79d5f-1466-4354-9577-fb1573875243-vxxrr", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:25.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8083" for this suite. 
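------------------------------
[Sketch] The ReplicationController case creates a one-replica RC running agnhost serve-hostname, waits for the replica to be Ready, then dials it until it answers with its own pod name (the "Got expected result from replica 1" line above). The RC itself looks roughly like this; the replica count, labels, and agnhost image tag are illustrative:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func createHostnameRC(ctx context.Context, cs kubernetes.Interface, ns string) error {
        replicas := int32(1)
        labels := map[string]string{"name": "my-hostname-basic"}
        rc := &corev1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
            Spec: corev1.ReplicationControllerSpec{
                Replicas: &replicas,
                Selector: labels,
                Template: &corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "my-hostname-basic",
                            Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // serves its pod name over HTTP
                            Args:  []string{"serve-hostname"},
                        }},
                    },
                },
            },
        }
        _, err := cs.CoreV1().ReplicationControllers(ns).Create(ctx, rc, metav1.CreateOptions{})
        return err
    }

------------------------------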
• [SLOW TEST:10.055 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":5,"skipped":38,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:20.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's command Apr 29 21:58:20.870: INFO: Waiting up to 5m0s for pod "var-expansion-e59eeb16-a6a1-4d1f-ae32-3470675dff79" in namespace "var-expansion-9853" to be "Succeeded or Failed" Apr 29 21:58:20.872: INFO: Pod "var-expansion-e59eeb16-a6a1-4d1f-ae32-3470675dff79": Phase="Pending", Reason="", readiness=false. Elapsed: 1.952836ms Apr 29 21:58:22.876: INFO: Pod "var-expansion-e59eeb16-a6a1-4d1f-ae32-3470675dff79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005831514s Apr 29 21:58:24.880: INFO: Pod "var-expansion-e59eeb16-a6a1-4d1f-ae32-3470675dff79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010010308s Apr 29 21:58:26.884: INFO: Pod "var-expansion-e59eeb16-a6a1-4d1f-ae32-3470675dff79": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013692779s Apr 29 21:58:28.887: INFO: Pod "var-expansion-e59eeb16-a6a1-4d1f-ae32-3470675dff79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016895532s STEP: Saw pod success Apr 29 21:58:28.887: INFO: Pod "var-expansion-e59eeb16-a6a1-4d1f-ae32-3470675dff79" satisfied condition "Succeeded or Failed" Apr 29 21:58:28.889: INFO: Trying to get logs from node node2 pod var-expansion-e59eeb16-a6a1-4d1f-ae32-3470675dff79 container dapi-container: STEP: delete the pod Apr 29 21:58:28.902: INFO: Waiting for pod var-expansion-e59eeb16-a6a1-4d1f-ae32-3470675dff79 to disappear Apr 29 21:58:28.904: INFO: Pod var-expansion-e59eeb16-a6a1-4d1f-ae32-3470675dff79 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:28.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9853" for this suite. 
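------------------------------
[Sketch] Variable expansion is purely a kubelet feature: $(VAR) references in a container's command and args are substituted from the container's environment before the process is exec'd, no shell required. The passing case above reduces to a pod like this (names and the message value are illustrative):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func varExpansionPod(ctx context.Context, cs kubernetes.Interface, ns string) error {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "dapi-container",
                    Image: "busybox",
                    Env:   []corev1.EnvVar{{Name: "MESSAGE", Value: "test-message"}},
                    // The kubelet rewrites $(MESSAGE) before the process starts.
                    Command: []string{"/bin/echo", "$(MESSAGE)"},
                }},
            },
        }
        _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
        return err
    }

------------------------------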
• [SLOW TEST:8.076 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":127,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:24.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 29 21:58:25.125: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 29 21:58:27.132: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866305, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866305, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866305, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866305, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 21:58:29.137: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866305, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866305, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866305, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866305, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 29 21:58:32.142: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:32.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4731" for this suite. STEP: Destroying namespace "webhook-4731-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.559 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":4,"skipped":28,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:25.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating Pod STEP: Reading file content from the nginx-container Apr 29 21:58:33.369: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1674 PodName:pod-sharedvolume-90259980-e2c7-41dc-b1ed-76ac508116b6 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 21:58:33.369: INFO: >>> kubeConfig: /root/.kube/config Apr 29 21:58:33.504: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:33.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1674" for this suite. 
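------------------------------
[Sketch] The emptyDir case relies on both containers of one pod mounting the same emptyDir volume, so a file written by one container is immediately visible to the other; the suite then execs `cat` in the second container, which is the ExecWithOptions line in the log above. A sketch of the pod, with illustrative names and a busybox writer standing in for the suite's images:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func sharedEmptyDirPod(ctx context.Context, cs kubernetes.Interface, ns string) error {
        mount := corev1.VolumeMount{Name: "shared-data", MountPath: "/usr/share/volumeshare"}
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume-demo"},
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{{
                    Name:         "shared-data",
                    VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                }},
                Containers: []corev1.Container{
                    {
                        Name: "writer", Image: "busybox",
                        Command:      []string{"sh", "-c", "echo Hello > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
                        VolumeMounts: []corev1.VolumeMount{mount},
                    },
                    {
                        Name: "busybox-main-container", Image: "busybox",
                        Command:      []string{"sleep", "3600"}, // the test execs `cat` in here
                        VolumeMounts: []corev1.VolumeMount{mount},
                    },
                },
            },
        }
        _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
        return err
    }

------------------------------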
• [SLOW TEST:8.180 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":6,"skipped":44,"failed":0} SSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:29.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 21:58:29.149: INFO: Creating pod... Apr 29 21:58:29.164: INFO: Pod Quantity: 1 Status: Pending Apr 29 21:58:30.168: INFO: Pod Quantity: 1 Status: Pending Apr 29 21:58:31.167: INFO: Pod Quantity: 1 Status: Pending Apr 29 21:58:32.167: INFO: Pod Quantity: 1 Status: Pending Apr 29 21:58:33.167: INFO: Pod Quantity: 1 Status: Pending Apr 29 21:58:34.167: INFO: Pod Status: Running Apr 29 21:58:34.167: INFO: Creating service... Apr 29 21:58:34.173: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-7344/pods/agnhost/proxy/some/path/with/DELETE Apr 29 21:58:34.176: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Apr 29 21:58:34.176: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-7344/pods/agnhost/proxy/some/path/with/GET Apr 29 21:58:34.179: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Apr 29 21:58:34.179: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-7344/pods/agnhost/proxy/some/path/with/HEAD Apr 29 21:58:34.181: INFO: http.Client request:HEAD | StatusCode:200 Apr 29 21:58:34.181: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-7344/pods/agnhost/proxy/some/path/with/OPTIONS Apr 29 21:58:34.183: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Apr 29 21:58:34.183: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-7344/pods/agnhost/proxy/some/path/with/PATCH Apr 29 21:58:34.186: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Apr 29 21:58:34.186: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-7344/pods/agnhost/proxy/some/path/with/POST Apr 29 21:58:34.188: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Apr 29 21:58:34.188: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-7344/pods/agnhost/proxy/some/path/with/PUT Apr 29 21:58:34.190: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT Apr 29 21:58:34.190: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-7344/services/test-service/proxy/some/path/with/DELETE Apr 29 21:58:34.194: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Apr 
29 21:58:34.194: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-7344/services/test-service/proxy/some/path/with/GET Apr 29 21:58:34.197: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Apr 29 21:58:34.197: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-7344/services/test-service/proxy/some/path/with/HEAD Apr 29 21:58:34.199: INFO: http.Client request:HEAD | StatusCode:200 Apr 29 21:58:34.199: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-7344/services/test-service/proxy/some/path/with/OPTIONS Apr 29 21:58:34.203: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Apr 29 21:58:34.203: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-7344/services/test-service/proxy/some/path/with/PATCH Apr 29 21:58:34.206: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Apr 29 21:58:34.206: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-7344/services/test-service/proxy/some/path/with/POST Apr 29 21:58:34.208: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Apr 29 21:58:34.208: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-7344/services/test-service/proxy/some/path/with/PUT Apr 29 21:58:34.212: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:34.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7344" for this suite. • [SLOW TEST:5.096 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":5,"skipped":230,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:24.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:37.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8337" for this suite. • [SLOW TEST:13.095 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":5,"skipped":35,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:32.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Apr 29 21:58:32.390: INFO: The status of Pod labelsupdate44a1b8c6-6947-4679-8583-c77559e940cb is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:58:34.394: INFO: The status of Pod labelsupdate44a1b8c6-6947-4679-8583-c77559e940cb is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:58:36.393: INFO: The status of Pod labelsupdate44a1b8c6-6947-4679-8583-c77559e940cb is Running (Ready = true) Apr 29 21:58:36.910: INFO: Successfully updated pod "labelsupdate44a1b8c6-6947-4679-8583-c77559e940cb" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:38.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1247" for this suite. 
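------------------------------
[Sketch] The Downward API labels case works because a downwardAPI volume file tracking metadata.labels is refreshed by the kubelet after the pod's labels change, so the test can update the labels and then read the new value from the running container without restarting anything. The volume shape, with illustrative names and label values:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func labelsVolumePod(ctx context.Context, cs kubernetes.Interface, ns string) error {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "labelsupdate-demo",
                Labels: map[string]string{"key": "value1"}, // later patched to a new value
            },
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "labels",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "client-container",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
            },
        }
        _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
        return err
    }

------------------------------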
• [SLOW TEST:6.574 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":43,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:33.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Apr 29 21:58:33.548: INFO: namespace kubectl-5954 Apr 29 21:58:33.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5954 create -f -' Apr 29 21:58:33.954: INFO: stderr: "" Apr 29 21:58:33.954: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Apr 29 21:58:34.957: INFO: Selector matched 1 pods for map[app:agnhost] Apr 29 21:58:34.957: INFO: Found 0 / 1 Apr 29 21:58:35.957: INFO: Selector matched 1 pods for map[app:agnhost] Apr 29 21:58:35.957: INFO: Found 0 / 1 Apr 29 21:58:36.958: INFO: Selector matched 1 pods for map[app:agnhost] Apr 29 21:58:36.958: INFO: Found 1 / 1 Apr 29 21:58:36.958: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 29 21:58:36.960: INFO: Selector matched 1 pods for map[app:agnhost] Apr 29 21:58:36.960: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 29 21:58:36.960: INFO: wait on agnhost-primary startup in kubectl-5954 Apr 29 21:58:36.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5954 logs agnhost-primary-bv6nh agnhost-primary' Apr 29 21:58:37.122: INFO: stderr: "" Apr 29 21:58:37.122: INFO: stdout: "Paused\n" STEP: exposing RC Apr 29 21:58:37.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5954 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Apr 29 21:58:37.338: INFO: stderr: "" Apr 29 21:58:37.338: INFO: stdout: "service/rm2 exposed\n" Apr 29 21:58:37.340: INFO: Service rm2 in namespace kubectl-5954 found. STEP: exposing service Apr 29 21:58:39.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5954 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Apr 29 21:58:39.555: INFO: stderr: "" Apr 29 21:58:39.555: INFO: stdout: "service/rm3 exposed\n" Apr 29 21:58:39.557: INFO: Service rm3 in namespace kubectl-5954 found. 
[AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:41.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5954" for this suite. • [SLOW TEST:8.041 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1223 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":7,"skipped":50,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:37.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-b41ef49b-2c07-47d8-bcef-d85909b58b30 STEP: Creating a pod to test consume configMaps Apr 29 21:58:37.838: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d24fe1b2-e657-41c1-a46b-7fae6f0364e7" in namespace "projected-2930" to be "Succeeded or Failed" Apr 29 21:58:37.840: INFO: Pod "pod-projected-configmaps-d24fe1b2-e657-41c1-a46b-7fae6f0364e7": Phase="Pending", Reason="", readiness=false. Elapsed: 1.912868ms Apr 29 21:58:39.844: INFO: Pod "pod-projected-configmaps-d24fe1b2-e657-41c1-a46b-7fae6f0364e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005012302s Apr 29 21:58:41.847: INFO: Pod "pod-projected-configmaps-d24fe1b2-e657-41c1-a46b-7fae6f0364e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008025871s STEP: Saw pod success Apr 29 21:58:41.847: INFO: Pod "pod-projected-configmaps-d24fe1b2-e657-41c1-a46b-7fae6f0364e7" satisfied condition "Succeeded or Failed" Apr 29 21:58:41.849: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-d24fe1b2-e657-41c1-a46b-7fae6f0364e7 container agnhost-container: STEP: delete the pod Apr 29 21:58:41.861: INFO: Waiting for pod pod-projected-configmaps-d24fe1b2-e657-41c1-a46b-7fae6f0364e7 to disappear Apr 29 21:58:41.864: INFO: Pod pod-projected-configmaps-d24fe1b2-e657-41c1-a46b-7fae6f0364e7 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:41.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2930" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":36,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:57:42.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition W0429 21:57:42.380624 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Apr 29 21:57:42.380: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Apr 29 21:57:42.382: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 21:57:42.385: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:43.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6041" for this suite. • [SLOW TEST:61.346 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":1,"skipped":21,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:38.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-dc09c670-b6d1-4177-bb5a-b755c0d78497 STEP: Creating a pod to test consume secrets Apr 29 21:58:39.002: INFO: Waiting up to 5m0s for pod "pod-secrets-0c3d2991-e2ce-4902-ba95-88817298fca1" in namespace "secrets-9339" to be "Succeeded or Failed" Apr 29 21:58:39.004: INFO: Pod 
"pod-secrets-0c3d2991-e2ce-4902-ba95-88817298fca1": Phase="Pending", Reason="", readiness=false. Elapsed: 1.917304ms Apr 29 21:58:41.006: INFO: Pod "pod-secrets-0c3d2991-e2ce-4902-ba95-88817298fca1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004853436s Apr 29 21:58:43.011: INFO: Pod "pod-secrets-0c3d2991-e2ce-4902-ba95-88817298fca1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009045801s Apr 29 21:58:45.014: INFO: Pod "pod-secrets-0c3d2991-e2ce-4902-ba95-88817298fca1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01285419s STEP: Saw pod success Apr 29 21:58:45.014: INFO: Pod "pod-secrets-0c3d2991-e2ce-4902-ba95-88817298fca1" satisfied condition "Succeeded or Failed" Apr 29 21:58:45.018: INFO: Trying to get logs from node node2 pod pod-secrets-0c3d2991-e2ce-4902-ba95-88817298fca1 container secret-volume-test: STEP: delete the pod Apr 29 21:58:45.106: INFO: Waiting for pod pod-secrets-0c3d2991-e2ce-4902-ba95-88817298fca1 to disappear Apr 29 21:58:45.108: INFO: Pod pod-secrets-0c3d2991-e2ce-4902-ba95-88817298fca1 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:45.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9339" for this suite. • [SLOW TEST:6.154 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":57,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:41.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Apr 29 21:58:41.656: INFO: Waiting up to 5m0s for pod "downward-api-de7888f3-9f5d-4244-93fe-5324ad589d81" in namespace "downward-api-7364" to be "Succeeded or Failed" Apr 29 21:58:41.658: INFO: Pod "downward-api-de7888f3-9f5d-4244-93fe-5324ad589d81": Phase="Pending", Reason="", readiness=false. Elapsed: 1.941641ms Apr 29 21:58:43.661: INFO: Pod "downward-api-de7888f3-9f5d-4244-93fe-5324ad589d81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005078344s Apr 29 21:58:45.663: INFO: Pod "downward-api-de7888f3-9f5d-4244-93fe-5324ad589d81": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007359566s STEP: Saw pod success Apr 29 21:58:45.663: INFO: Pod "downward-api-de7888f3-9f5d-4244-93fe-5324ad589d81" satisfied condition "Succeeded or Failed" Apr 29 21:58:45.665: INFO: Trying to get logs from node node2 pod downward-api-de7888f3-9f5d-4244-93fe-5324ad589d81 container dapi-container: STEP: delete the pod Apr 29 21:58:45.677: INFO: Waiting for pod downward-api-de7888f3-9f5d-4244-93fe-5324ad589d81 to disappear Apr 29 21:58:45.679: INFO: Pod downward-api-de7888f3-9f5d-4244-93fe-5324ad589d81 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:45.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7364" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":71,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:43.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-94b40abb-d88e-4f2a-bc91-2b521284cc16 STEP: Creating a pod to test consume configMaps Apr 29 21:58:43.749: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-93621be5-d91f-4233-81d9-6d77404bf2c5" in namespace "projected-8502" to be "Succeeded or Failed" Apr 29 21:58:43.752: INFO: Pod "pod-projected-configmaps-93621be5-d91f-4233-81d9-6d77404bf2c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.331319ms Apr 29 21:58:45.754: INFO: Pod "pod-projected-configmaps-93621be5-d91f-4233-81d9-6d77404bf2c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004989456s Apr 29 21:58:47.760: INFO: Pod "pod-projected-configmaps-93621be5-d91f-4233-81d9-6d77404bf2c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010387082s STEP: Saw pod success Apr 29 21:58:47.760: INFO: Pod "pod-projected-configmaps-93621be5-d91f-4233-81d9-6d77404bf2c5" satisfied condition "Succeeded or Failed" Apr 29 21:58:47.762: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-93621be5-d91f-4233-81d9-6d77404bf2c5 container agnhost-container: STEP: delete the pod Apr 29 21:58:47.775: INFO: Waiting for pod pod-projected-configmaps-93621be5-d91f-4233-81d9-6d77404bf2c5 to disappear Apr 29 21:58:47.778: INFO: Pod pod-projected-configmaps-93621be5-d91f-4233-81d9-6d77404bf2c5 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:47.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8502" for this suite. 
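The "volume with mappings" spec just above is the KeyToPath case: a projected configMap source that surfaces key data-1 under the relative path path/to/data-2. A sketch with illustrative names (the suite's real keys and values differ):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-demo"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps("default").Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "agnhost-container",
				Image:   "busybox", // stand-in image; the key mapping is the point
				Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-cm-demo"},
								// The "mapping": key data-1 appears as path/to/data-2.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}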
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":25,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:45.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 29 21:58:45.250: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bf0b1dbb-e8f7-45f5-8acf-c35030dec81b" in namespace "projected-9317" to be "Succeeded or Failed" Apr 29 21:58:45.252: INFO: Pod "downwardapi-volume-bf0b1dbb-e8f7-45f5-8acf-c35030dec81b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.9938ms Apr 29 21:58:47.255: INFO: Pod "downwardapi-volume-bf0b1dbb-e8f7-45f5-8acf-c35030dec81b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00570995s Apr 29 21:58:49.259: INFO: Pod "downwardapi-volume-bf0b1dbb-e8f7-45f5-8acf-c35030dec81b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009780311s STEP: Saw pod success Apr 29 21:58:49.259: INFO: Pod "downwardapi-volume-bf0b1dbb-e8f7-45f5-8acf-c35030dec81b" satisfied condition "Succeeded or Failed" Apr 29 21:58:49.262: INFO: Trying to get logs from node node2 pod downwardapi-volume-bf0b1dbb-e8f7-45f5-8acf-c35030dec81b container client-container: STEP: delete the pod Apr 29 21:58:49.276: INFO: Waiting for pod downwardapi-volume-bf0b1dbb-e8f7-45f5-8acf-c35030dec81b to disappear Apr 29 21:58:49.278: INFO: Pod downwardapi-volume-bf0b1dbb-e8f7-45f5-8acf-c35030dec81b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:49.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9317" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":106,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:45.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 29 21:58:45.775: INFO: Waiting up to 5m0s for pod "pod-07a088cb-5381-44c2-801a-f03b0123b4bd" in namespace "emptydir-6386" to be "Succeeded or Failed" Apr 29 21:58:45.781: INFO: Pod "pod-07a088cb-5381-44c2-801a-f03b0123b4bd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.740665ms Apr 29 21:58:47.784: INFO: Pod "pod-07a088cb-5381-44c2-801a-f03b0123b4bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008366724s Apr 29 21:58:49.789: INFO: Pod "pod-07a088cb-5381-44c2-801a-f03b0123b4bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013595471s Apr 29 21:58:51.792: INFO: Pod "pod-07a088cb-5381-44c2-801a-f03b0123b4bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016548902s STEP: Saw pod success Apr 29 21:58:51.792: INFO: Pod "pod-07a088cb-5381-44c2-801a-f03b0123b4bd" satisfied condition "Succeeded or Failed" Apr 29 21:58:51.794: INFO: Trying to get logs from node node2 pod pod-07a088cb-5381-44c2-801a-f03b0123b4bd container test-container: STEP: delete the pod Apr 29 21:58:51.806: INFO: Waiting for pod pod-07a088cb-5381-44c2-801a-f03b0123b4bd to disappear Apr 29 21:58:51.808: INFO: Pod pod-07a088cb-5381-44c2-801a-f03b0123b4bd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:51.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6386" for this suite. 
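The emptyDir "(root,0666,default)" spec above mounts a default-medium emptyDir and verifies 0666 file permissions from inside the container. A rough stand-alone version (busybox replaces the suite's mount-test image; names are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Write a file, force 0666, and print the resulting mode bits.
				Command: []string{"sh", "-c",
					"echo hello > /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// An empty EmptyDirVolumeSource selects the default (node disk) medium.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}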
• [SLOW TEST:6.071 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":100,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:47.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-d8f98111-a707-48c4-8a2a-b707a12b00c0 STEP: Creating a pod to test consume secrets Apr 29 21:58:47.876: INFO: Waiting up to 5m0s for pod "pod-secrets-6f6dcc4a-4205-4c97-a279-f74db33b0ade" in namespace "secrets-7038" to be "Succeeded or Failed" Apr 29 21:58:47.878: INFO: Pod "pod-secrets-6f6dcc4a-4205-4c97-a279-f74db33b0ade": Phase="Pending", Reason="", readiness=false. Elapsed: 2.383125ms Apr 29 21:58:49.882: INFO: Pod "pod-secrets-6f6dcc4a-4205-4c97-a279-f74db33b0ade": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006167279s Apr 29 21:58:51.885: INFO: Pod "pod-secrets-6f6dcc4a-4205-4c97-a279-f74db33b0ade": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009188284s STEP: Saw pod success Apr 29 21:58:51.885: INFO: Pod "pod-secrets-6f6dcc4a-4205-4c97-a279-f74db33b0ade" satisfied condition "Succeeded or Failed" Apr 29 21:58:51.887: INFO: Trying to get logs from node node2 pod pod-secrets-6f6dcc4a-4205-4c97-a279-f74db33b0ade container secret-volume-test: STEP: delete the pod Apr 29 21:58:51.902: INFO: Waiting for pod pod-secrets-6f6dcc4a-4205-4c97-a279-f74db33b0ade to disappear Apr 29 21:58:51.904: INFO: Pod pod-secrets-6f6dcc4a-4205-4c97-a279-f74db33b0ade no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:51.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7038" for this suite. STEP: Destroying namespace "secret-namespace-7925" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":30,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:41.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Apr 29 21:58:41.920: INFO: The status of Pod labelsupdateb6741ce1-270c-49d0-8eb6-e9b64bf902eb is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:58:43.923: INFO: The status of Pod labelsupdateb6741ce1-270c-49d0-8eb6-e9b64bf902eb is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:58:45.923: INFO: The status of Pod labelsupdateb6741ce1-270c-49d0-8eb6-e9b64bf902eb is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:58:47.922: INFO: The status of Pod labelsupdateb6741ce1-270c-49d0-8eb6-e9b64bf902eb is Running (Ready = true) Apr 29 21:58:48.439: INFO: Successfully updated pod "labelsupdateb6741ce1-270c-49d0-8eb6-e9b64bf902eb" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:52.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4714" for this suite. 
• [SLOW TEST:10.592 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":43,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:49.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 21:58:49.338: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:55.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7503" for this suite. • [SLOW TEST:6.049 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":8,"skipped":122,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:55.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0429 21:58:55.396491 32 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should support CronJob API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: creating STEP: getting STEP: listing STEP: watching Apr 29 21:58:55.404: INFO: 
starting watch STEP: cluster-wide listing STEP: cluster-wide watching Apr 29 21:58:55.407: INFO: starting watch STEP: patching STEP: updating Apr 29 21:58:55.421: INFO: waiting for watch events with expected annotations Apr 29 21:58:55.421: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:55.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-7258" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":9,"skipped":124,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:34.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. Apr 29 21:58:34.292: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:58:36.296: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:58:38.295: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Apr 29 21:58:38.309: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:58:40.313: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:58:42.313: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 29 21:58:42.325: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 29 21:58:42.329: INFO: Pod pod-with-poststart-exec-hook still exists Apr 29 21:58:44.334: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 29 21:58:44.337: INFO: Pod pod-with-poststart-exec-hook still exists Apr 29 21:58:46.329: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 29 21:58:46.332: INFO: Pod pod-with-poststart-exec-hook still exists Apr 29 21:58:48.330: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 29 21:58:48.333: INFO: Pod pod-with-poststart-exec-hook still exists Apr 29 21:58:50.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 29 21:58:50.333: INFO: Pod pod-with-poststart-exec-hook still exists Apr 29 21:58:52.329: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 29 21:58:52.332: INFO: Pod 
pod-with-poststart-exec-hook still exists Apr 29 21:58:54.330: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 29 21:58:54.333: INFO: Pod pod-with-poststart-exec-hook still exists Apr 29 21:58:56.329: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 29 21:58:56.332: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:56.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3404" for this suite. • [SLOW TEST:22.089 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":245,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:09.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Apr 29 21:58:09.290: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Apr 29 21:58:30.415: INFO: >>> kubeConfig: /root/.kube/config Apr 29 21:58:39.055: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:58:59.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6660" for this suite. 
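This CRD-publishing spec, like the two Simple CustomResourceDefinition specs elsewhere in the run, starts by registering CRDs through the apiextensions API. The create/list/delete core, sketched with the apiextensions clientset and an illustrative example.com group:

package main

import (
	"context"
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				// apiextensions.k8s.io/v1 requires a structural schema.
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}
	crds := client.ApiextensionsV1().CustomResourceDefinitions()
	if _, err := crds.Create(ctx, crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	list, err := crds.List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("CRDs in cluster:", len(list.Items))
	if err := crds.Delete(ctx, "foos.example.com", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}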
• [SLOW TEST:50.592 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:56.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-7f9f3c76-15f1-48bf-8858-05c35584a1fb STEP: Creating a pod to test consume secrets Apr 29 21:58:56.448: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bdf5493e-1bfb-450d-9f18-15d02815ece3" in namespace "projected-333" to be "Succeeded or Failed" Apr 29 21:58:56.454: INFO: Pod "pod-projected-secrets-bdf5493e-1bfb-450d-9f18-15d02815ece3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.302563ms Apr 29 21:58:58.457: INFO: Pod "pod-projected-secrets-bdf5493e-1bfb-450d-9f18-15d02815ece3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008764487s Apr 29 21:59:00.462: INFO: Pod "pod-projected-secrets-bdf5493e-1bfb-450d-9f18-15d02815ece3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014031956s STEP: Saw pod success Apr 29 21:59:00.462: INFO: Pod "pod-projected-secrets-bdf5493e-1bfb-450d-9f18-15d02815ece3" satisfied condition "Succeeded or Failed" Apr 29 21:59:00.465: INFO: Trying to get logs from node node2 pod pod-projected-secrets-bdf5493e-1bfb-450d-9f18-15d02815ece3 container projected-secret-volume-test: STEP: delete the pod Apr 29 21:59:00.476: INFO: Waiting for pod pod-projected-secrets-bdf5493e-1bfb-450d-9f18-15d02815ece3 to disappear Apr 29 21:59:00.478: INFO: Pod pod-projected-secrets-bdf5493e-1bfb-450d-9f18-15d02815ece3 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:00.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-333" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":278,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:57:57.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0429 21:57:57.109859 31 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ReplaceConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring the job is replaced with a new one STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:01.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-3398" for this suite. • [SLOW TEST:64.054 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":6,"skipped":192,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:00.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslicemirroring STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 [It] should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: mirroring a new custom Endpoint STEP: mirroring an update to a custom Endpoint STEP: mirroring deletion of a custom Endpoint Apr 29 21:59:00.543: INFO: Waiting for 0 EndpointSlices to exist, got 1 [AfterEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:02.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslicemirroring-491" for this suite. 
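The mirroring spec above writes a plain Endpoints object and waits for the EndpointSliceMirroring controller to converge the mirrored EndpointSlices (mirroring applies to Endpoints that back a Service without a selector). A sketch of both halves of that interaction, with illustrative names and addresses:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	discoveryv1 "k8s.io/api/discovery/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "default"

	ep := &corev1.Endpoints{
		ObjectMeta: metav1.ObjectMeta{Name: "example-custom-endpoints"},
		Subsets: []corev1.EndpointSubset{{
			Addresses: []corev1.EndpointAddress{{IP: "10.0.0.10"}},
			Ports:     []corev1.EndpointPort{{Name: "http", Port: 80}},
		}},
	}
	if _, err := cs.CoreV1().Endpoints(ns).Create(ctx, ep, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Mirrored slices carry the kubernetes.io/service-name label.
	slices, err := cs.DiscoveryV1().EndpointSlices(ns).List(ctx, metav1.ListOptions{
		LabelSelector: discoveryv1.LabelServiceName + "=example-custom-endpoints",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("mirrored EndpointSlices:", len(slices.Items))
}

The tail of the log ("Waiting for 0 EndpointSlices to exist, got 1") is the deletion half of the same reconcile loop.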
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":8,"skipped":282,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:01.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name projected-secret-test-cdeb9380-816a-4086-a1c1-0ca32662457d STEP: Creating a pod to test consume secrets Apr 29 21:59:01.181: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-580e7878-88e3-4182-bbd4-e152d318b997" in namespace "projected-1790" to be "Succeeded or Failed" Apr 29 21:59:01.182: INFO: Pod "pod-projected-secrets-580e7878-88e3-4182-bbd4-e152d318b997": Phase="Pending", Reason="", readiness=false. Elapsed: 1.80844ms Apr 29 21:59:03.185: INFO: Pod "pod-projected-secrets-580e7878-88e3-4182-bbd4-e152d318b997": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004485391s Apr 29 21:59:05.190: INFO: Pod "pod-projected-secrets-580e7878-88e3-4182-bbd4-e152d318b997": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00899245s STEP: Saw pod success Apr 29 21:59:05.190: INFO: Pod "pod-projected-secrets-580e7878-88e3-4182-bbd4-e152d318b997" satisfied condition "Succeeded or Failed" Apr 29 21:59:05.192: INFO: Trying to get logs from node node1 pod pod-projected-secrets-580e7878-88e3-4182-bbd4-e152d318b997 container secret-volume-test: STEP: delete the pod Apr 29 21:59:05.206: INFO: Waiting for pod pod-projected-secrets-580e7878-88e3-4182-bbd4-e152d318b997 to disappear Apr 29 21:59:05.208: INFO: Pod pod-projected-secrets-580e7878-88e3-4182-bbd4-e152d318b997 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:05.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1790" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":194,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":7,"skipped":124,"failed":0} [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:59.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 29 21:58:59.894: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0cd7cc0a-7d11-4715-9daa-710176e7a586" in namespace "projected-4626" to be "Succeeded or Failed" Apr 29 21:58:59.897: INFO: Pod "downwardapi-volume-0cd7cc0a-7d11-4715-9daa-710176e7a586": Phase="Pending", Reason="", readiness=false. Elapsed: 2.59202ms Apr 29 21:59:01.902: INFO: Pod "downwardapi-volume-0cd7cc0a-7d11-4715-9daa-710176e7a586": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007575351s Apr 29 21:59:03.907: INFO: Pod "downwardapi-volume-0cd7cc0a-7d11-4715-9daa-710176e7a586": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012399465s Apr 29 21:59:05.911: INFO: Pod "downwardapi-volume-0cd7cc0a-7d11-4715-9daa-710176e7a586": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017136833s STEP: Saw pod success Apr 29 21:59:05.912: INFO: Pod "downwardapi-volume-0cd7cc0a-7d11-4715-9daa-710176e7a586" satisfied condition "Succeeded or Failed" Apr 29 21:59:05.914: INFO: Trying to get logs from node node2 pod downwardapi-volume-0cd7cc0a-7d11-4715-9daa-710176e7a586 container client-container: STEP: delete the pod Apr 29 21:59:05.928: INFO: Waiting for pod downwardapi-volume-0cd7cc0a-7d11-4715-9daa-710176e7a586 to disappear Apr 29 21:59:05.930: INFO: Pod downwardapi-volume-0cd7cc0a-7d11-4715-9daa-710176e7a586 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:05.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4626" for this suite. 
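The cpu-request spec above surfaces a container's requests.cpu through a projected downwardAPI source. The essential wiring, with illustrative names and busybox in place of the suite's client container:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-cpu-request-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"cat", "/etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_request",
									// requests.cpu is exposed in whole cores by default;
									// a Divisor can rescale it (e.g. to millicores).
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}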
• [SLOW TEST:6.075 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":124,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:02.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating pod Apr 29 21:59:02.634: INFO: The status of Pod pod-hostip-87808be5-e5b1-4ef6-9634-15569b6a12ae is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:04.637: INFO: The status of Pod pod-hostip-87808be5-e5b1-4ef6-9634-15569b6a12ae is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:06.641: INFO: The status of Pod pod-hostip-87808be5-e5b1-4ef6-9634-15569b6a12ae is Running (Ready = true) Apr 29 21:59:06.652: INFO: Pod pod-hostip-87808be5-e5b1-4ef6-9634-15569b6a12ae has hostIP: 10.10.190.207 [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:06.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5470" for this suite. 
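The host-IP spec above only needs to observe status.hostIP becoming non-empty once the pod runs (10.10.190.207 in this run). The read side is a simple poll against the pods API; the pod name and namespace below are illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll until the kubelet has reported a host IP for the pod.
	err = wait.PollImmediate(2*time.Second, time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "pod-hostip-demo", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pod.Status.HostIP == "" {
			return false, nil
		}
		fmt.Println("hostIP:", pod.Status.HostIP)
		return true, nil
	})
	if err != nil {
		panic(err)
	}
}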
• ------------------------------ {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":303,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:55.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1565 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-1565 I0429 21:58:55.667123 32 runners.go:190] Created replication controller with name: externalname-service, namespace: services-1565, replica count: 2 I0429 21:58:58.717940 32 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 21:59:01.719091 32 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 29 21:59:01.719: INFO: Creating new exec pod Apr 29 21:59:10.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1565 exec execpodp5bfq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Apr 29 21:59:11.009: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Apr 29 21:59:11.009: INFO: stdout: "externalname-service-jfxn6" Apr 29 21:59:11.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1565 exec execpodp5bfq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.17.128 80' Apr 29 21:59:11.249: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.17.128 80\nConnection to 10.233.17.128 80 port [tcp/http] succeeded!\n" Apr 29 21:59:11.249: INFO: stdout: "externalname-service-kvdm2" Apr 29 21:59:11.249: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:11.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1565" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:15.638 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":10,"skipped":211,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:11.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching Apr 29 21:59:11.321: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating Apr 29 21:59:11.336: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:11.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-8452" for this suite. 
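The RuntimeClass spec above is a pure API walk (create, watch, get, list, patch, update, delete, deleteCollection) against node.k8s.io/v1. The create/list/delete portion in client-go; "runc" is an assumed handler name and must match one the node's CRI actually configures:

package main

import (
	"context"
	"fmt"

	nodev1 "k8s.io/api/node/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	rc := &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "example-runtimeclass"},
		Handler:    "runc", // assumed handler; must exist in the CRI config
	}
	rcs := cs.NodeV1().RuntimeClasses() // cluster-scoped, no namespace
	if _, err := rcs.Create(ctx, rc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	list, err := rcs.List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("RuntimeClasses:", len(list.Items))
	if err := rcs.Delete(ctx, "example-runtimeclass", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}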
• ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":11,"skipped":219,"failed":0} SSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:11.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating api versions Apr 29 21:59:11.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5590 api-versions' Apr 29 21:59:11.530: INFO: stderr: "" Apr 29 21:59:11.530: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncustom.metrics.k8s.io/v1beta1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nintel.com/v1\nk8s.cni.cncf.io/v1\nmonitoring.coreos.com/v1\nmonitoring.coreos.com/v1alpha1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\ntelemetry.intel.com/v1alpha1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:11.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5590" for this suite. 
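The api-versions check above can be reproduced with client-go's discovery client, which is what kubectl consults for the same listing; the core group shows up as the bare "v1" the test asserts on:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	foundV1 := false
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			fmt.Println(v.GroupVersion) // e.g. "batch/v1"; the core group is just "v1"
			if v.GroupVersion == "v1" {
				foundV1 = true
			}
		}
	}
	fmt.Println("core v1 present:", foundV1)
}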
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":12,"skipped":222,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:52.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-957 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating statefulset ss in namespace statefulset-957 Apr 29 21:58:52.044: INFO: Found 0 stateful pods, waiting for 1 Apr 29 21:59:02.048: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified STEP: Patch a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Apr 29 21:59:02.067: INFO: Deleting all statefulset in ns statefulset-957 Apr 29 21:59:02.069: INFO: Scaling statefulset ss to 0 Apr 29 21:59:12.081: INFO: Waiting for statefulset status.replicas updated to 0 Apr 29 21:59:12.083: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:12.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-957" for this suite. 
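The scale-subresource steps above (get scale, update it, verify Spec.Replicas) go through the same /scale endpoint that kubectl's scale verb uses. A sketch against the test's own statefulset name and namespace; the replica count 2 is illustrative. Newer kubectl releases (not the v1.21 client used here) also expose --subresource flags on get and patch:

kubectl -n statefulset-957 scale statefulset ss --replicas=2
kubectl -n statefulset-957 get statefulset ss -o jsonpath='{.spec.replicas}'
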
• [SLOW TEST:20.088 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":4,"skipped":79,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:06.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override all Apr 29 21:59:06.698: INFO: Waiting up to 5m0s for pod "client-containers-0e958f63-8b9a-4314-bc36-89df8701639a" in namespace "containers-1182" to be "Succeeded or Failed" Apr 29 21:59:06.700: INFO: Pod "client-containers-0e958f63-8b9a-4314-bc36-89df8701639a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03757ms Apr 29 21:59:08.705: INFO: Pod "client-containers-0e958f63-8b9a-4314-bc36-89df8701639a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006716596s Apr 29 21:59:10.709: INFO: Pod "client-containers-0e958f63-8b9a-4314-bc36-89df8701639a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010291215s Apr 29 21:59:12.712: INFO: Pod "client-containers-0e958f63-8b9a-4314-bc36-89df8701639a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013508235s STEP: Saw pod success Apr 29 21:59:12.712: INFO: Pod "client-containers-0e958f63-8b9a-4314-bc36-89df8701639a" satisfied condition "Succeeded or Failed" Apr 29 21:59:12.715: INFO: Trying to get logs from node node2 pod client-containers-0e958f63-8b9a-4314-bc36-89df8701639a container agnhost-container: STEP: delete the pod Apr 29 21:59:12.789: INFO: Waiting for pod client-containers-0e958f63-8b9a-4314-bc36-89df8701639a to disappear Apr 29 21:59:12.791: INFO: Pod client-containers-0e958f63-8b9a-4314-bc36-89df8701639a no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:12.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1182" for this suite. 
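The "override all" pod above sets both command and args, replacing the image's ENTRYPOINT and CMD. A minimal sketch using busybox instead of the test's agnhost image; the pod name and echo payload are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/echo"]        # replaces the image ENTRYPOINT
    args: ["overridden", "args"]  # replaces the image CMD
EOF
kubectl logs override-demo   # once the pod has completed, prints: overridden args
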
• [SLOW TEST:6.130 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":304,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:06.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting up the test STEP: Creating hostNetwork=false pod Apr 29 21:59:06.059: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:08.062: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:10.063: INFO: The status of Pod test-pod is Running (Ready = true) STEP: Creating hostNetwork=true pod Apr 29 21:59:10.078: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:12.084: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:14.085: INFO: The status of Pod test-host-network-pod is Running (Ready = true) STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 29 21:59:14.088: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4485 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 21:59:14.088: INFO: >>> kubeConfig: /root/.kube/config Apr 29 21:59:14.274: INFO: Exec stderr: "" Apr 29 21:59:14.274: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4485 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 21:59:14.274: INFO: >>> kubeConfig: /root/.kube/config Apr 29 21:59:14.368: INFO: Exec stderr: "" Apr 29 21:59:14.368: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4485 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 21:59:14.368: INFO: >>> kubeConfig: /root/.kube/config Apr 29 21:59:14.447: INFO: Exec stderr: "" Apr 29 21:59:14.447: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4485 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 21:59:14.447: INFO: >>> kubeConfig: /root/.kube/config Apr 29 21:59:14.531: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not 
kubelet-managed since container specifies /etc/hosts mount Apr 29 21:59:14.531: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4485 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 21:59:14.531: INFO: >>> kubeConfig: /root/.kube/config Apr 29 21:59:14.617: INFO: Exec stderr: "" Apr 29 21:59:14.617: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4485 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 21:59:14.617: INFO: >>> kubeConfig: /root/.kube/config Apr 29 21:59:14.802: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 29 21:59:14.802: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4485 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 21:59:14.802: INFO: >>> kubeConfig: /root/.kube/config Apr 29 21:59:14.890: INFO: Exec stderr: "" Apr 29 21:59:14.890: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4485 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 21:59:14.890: INFO: >>> kubeConfig: /root/.kube/config Apr 29 21:59:14.984: INFO: Exec stderr: "" Apr 29 21:59:14.984: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4485 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 21:59:14.984: INFO: >>> kubeConfig: /root/.kube/config Apr 29 21:59:15.063: INFO: Exec stderr: "" Apr 29 21:59:15.063: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4485 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 21:59:15.063: INFO: >>> kubeConfig: /root/.kube/config Apr 29 21:59:15.146: INFO: Exec stderr: "" [AfterEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:15.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-4485" for this suite. 
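The verification steps above can be reproduced by hand: with hostNetwork=false the kubelet writes /etc/hosts (the managed file opens with a "# Kubernetes-managed hosts file." header, though the exact text may vary by version), while a container that mounts its own /etc/hosts, or a hostNetwork=true pod, keeps the file unmanaged. A sketch against the test's own pod and container names, which only works while the namespace still exists:

kubectl -n e2e-kubelet-etc-hosts-4485 exec test-pod -c busybox-1 -- head -1 /etc/hosts
kubectl -n e2e-kubelet-etc-hosts-4485 exec test-pod -c busybox-3 -- head -1 /etc/hosts
kubectl -n e2e-kubelet-etc-hosts-4485 exec test-host-network-pod -c busybox-1 -- head -1 /etc/hosts
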
• [SLOW TEST:9.133 seconds] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":162,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:51.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-projected-56s4 STEP: Creating a pod to test atomic-volume-subpath Apr 29 21:58:51.893: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-56s4" in namespace "subpath-178" to be "Succeeded or Failed" Apr 29 21:58:51.895: INFO: Pod "pod-subpath-test-projected-56s4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157465ms Apr 29 21:58:53.899: INFO: Pod "pod-subpath-test-projected-56s4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006045369s Apr 29 21:58:55.901: INFO: Pod "pod-subpath-test-projected-56s4": Phase="Running", Reason="", readiness=true. Elapsed: 4.008237705s Apr 29 21:58:57.904: INFO: Pod "pod-subpath-test-projected-56s4": Phase="Running", Reason="", readiness=true. Elapsed: 6.010892883s Apr 29 21:58:59.910: INFO: Pod "pod-subpath-test-projected-56s4": Phase="Running", Reason="", readiness=true. Elapsed: 8.017731096s Apr 29 21:59:01.914: INFO: Pod "pod-subpath-test-projected-56s4": Phase="Running", Reason="", readiness=true. Elapsed: 10.02167018s Apr 29 21:59:03.918: INFO: Pod "pod-subpath-test-projected-56s4": Phase="Running", Reason="", readiness=true. Elapsed: 12.025131301s Apr 29 21:59:05.923: INFO: Pod "pod-subpath-test-projected-56s4": Phase="Running", Reason="", readiness=true. Elapsed: 14.029883198s Apr 29 21:59:07.926: INFO: Pod "pod-subpath-test-projected-56s4": Phase="Running", Reason="", readiness=true. Elapsed: 16.033340811s Apr 29 21:59:09.932: INFO: Pod "pod-subpath-test-projected-56s4": Phase="Running", Reason="", readiness=true. Elapsed: 18.038936653s Apr 29 21:59:11.936: INFO: Pod "pod-subpath-test-projected-56s4": Phase="Running", Reason="", readiness=true. Elapsed: 20.043002062s Apr 29 21:59:13.940: INFO: Pod "pod-subpath-test-projected-56s4": Phase="Running", Reason="", readiness=true. Elapsed: 22.04725666s Apr 29 21:59:15.943: INFO: Pod "pod-subpath-test-projected-56s4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.050723762s STEP: Saw pod success Apr 29 21:59:15.943: INFO: Pod "pod-subpath-test-projected-56s4" satisfied condition "Succeeded or Failed" Apr 29 21:59:15.946: INFO: Trying to get logs from node node1 pod pod-subpath-test-projected-56s4 container test-container-subpath-projected-56s4: STEP: delete the pod Apr 29 21:59:15.959: INFO: Waiting for pod pod-subpath-test-projected-56s4 to disappear Apr 29 21:59:15.961: INFO: Pod pod-subpath-test-projected-56s4 no longer exists STEP: Deleting pod pod-subpath-test-projected-56s4 Apr 29 21:59:15.961: INFO: Deleting pod "pod-subpath-test-projected-56s4" in namespace "subpath-178" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:15.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-178" for this suite. • [SLOW TEST:24.117 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":118,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:12.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 29 21:59:12.853: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4084db18-64b9-4f97-b26c-38618d57f68a" in namespace "downward-api-9318" to be "Succeeded or Failed" Apr 29 21:59:12.855: INFO: Pod "downwardapi-volume-4084db18-64b9-4f97-b26c-38618d57f68a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286238ms Apr 29 21:59:14.860: INFO: Pod "downwardapi-volume-4084db18-64b9-4f97-b26c-38618d57f68a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006622763s Apr 29 21:59:16.863: INFO: Pod "downwardapi-volume-4084db18-64b9-4f97-b26c-38618d57f68a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010355827s STEP: Saw pod success Apr 29 21:59:16.864: INFO: Pod "downwardapi-volume-4084db18-64b9-4f97-b26c-38618d57f68a" satisfied condition "Succeeded or Failed" Apr 29 21:59:16.866: INFO: Trying to get logs from node node1 pod downwardapi-volume-4084db18-64b9-4f97-b26c-38618d57f68a container client-container: STEP: delete the pod Apr 29 21:59:16.879: INFO: Waiting for pod downwardapi-volume-4084db18-64b9-4f97-b26c-38618d57f68a to disappear Apr 29 21:59:16.881: INFO: Pod downwardapi-volume-4084db18-64b9-4f97-b26c-38618d57f68a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:16.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9318" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":314,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:52.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-jfmv STEP: Creating a pod to test atomic-volume-subpath Apr 29 21:58:52.641: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jfmv" in namespace "subpath-7580" to be "Succeeded or Failed" Apr 29 21:58:52.644: INFO: Pod "pod-subpath-test-configmap-jfmv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.474269ms Apr 29 21:58:54.647: INFO: Pod "pod-subpath-test-configmap-jfmv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00593117s Apr 29 21:58:56.651: INFO: Pod "pod-subpath-test-configmap-jfmv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010156056s Apr 29 21:58:58.655: INFO: Pod "pod-subpath-test-configmap-jfmv": Phase="Running", Reason="", readiness=true. Elapsed: 6.014320682s Apr 29 21:59:00.661: INFO: Pod "pod-subpath-test-configmap-jfmv": Phase="Running", Reason="", readiness=true. Elapsed: 8.020017038s Apr 29 21:59:02.664: INFO: Pod "pod-subpath-test-configmap-jfmv": Phase="Running", Reason="", readiness=true. Elapsed: 10.023111575s Apr 29 21:59:04.669: INFO: Pod "pod-subpath-test-configmap-jfmv": Phase="Running", Reason="", readiness=true. Elapsed: 12.027399451s Apr 29 21:59:06.672: INFO: Pod "pod-subpath-test-configmap-jfmv": Phase="Running", Reason="", readiness=true. Elapsed: 14.031175575s Apr 29 21:59:08.679: INFO: Pod "pod-subpath-test-configmap-jfmv": Phase="Running", Reason="", readiness=true. Elapsed: 16.037674818s Apr 29 21:59:10.682: INFO: Pod "pod-subpath-test-configmap-jfmv": Phase="Running", Reason="", readiness=true. Elapsed: 18.040842565s Apr 29 21:59:12.685: INFO: Pod "pod-subpath-test-configmap-jfmv": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.044013304s Apr 29 21:59:14.689: INFO: Pod "pod-subpath-test-configmap-jfmv": Phase="Running", Reason="", readiness=true. Elapsed: 22.047800728s Apr 29 21:59:16.693: INFO: Pod "pod-subpath-test-configmap-jfmv": Phase="Running", Reason="", readiness=true. Elapsed: 24.052339667s Apr 29 21:59:18.697: INFO: Pod "pod-subpath-test-configmap-jfmv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.055453971s STEP: Saw pod success Apr 29 21:59:18.697: INFO: Pod "pod-subpath-test-configmap-jfmv" satisfied condition "Succeeded or Failed" Apr 29 21:59:18.699: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-jfmv container test-container-subpath-configmap-jfmv: STEP: delete the pod Apr 29 21:59:18.714: INFO: Waiting for pod pod-subpath-test-configmap-jfmv to disappear Apr 29 21:59:18.716: INFO: Pod pod-subpath-test-configmap-jfmv no longer exists STEP: Deleting pod pod-subpath-test-configmap-jfmv Apr 29 21:59:18.716: INFO: Deleting pod "pod-subpath-test-configmap-jfmv" in namespace "subpath-7580" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:18.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7580" for this suite. • [SLOW TEST:26.122 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":109,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:16.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 29 21:59:16.927: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b50526b-7b93-4852-bbe5-d4450e7240a3" in namespace "projected-8454" to be "Succeeded or Failed" Apr 29 21:59:16.929: INFO: Pod "downwardapi-volume-8b50526b-7b93-4852-bbe5-d4450e7240a3": Phase="Pending", Reason="", readiness=false. Elapsed: 1.844523ms Apr 29 21:59:18.932: INFO: Pod "downwardapi-volume-8b50526b-7b93-4852-bbe5-d4450e7240a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004681981s Apr 29 21:59:20.935: INFO: Pod "downwardapi-volume-8b50526b-7b93-4852-bbe5-d4450e7240a3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008026456s STEP: Saw pod success Apr 29 21:59:20.935: INFO: Pod "downwardapi-volume-8b50526b-7b93-4852-bbe5-d4450e7240a3" satisfied condition "Succeeded or Failed" Apr 29 21:59:20.938: INFO: Trying to get logs from node node2 pod downwardapi-volume-8b50526b-7b93-4852-bbe5-d4450e7240a3 container client-container: STEP: delete the pod Apr 29 21:59:20.950: INFO: Waiting for pod downwardapi-volume-8b50526b-7b93-4852-bbe5-d4450e7240a3 to disappear Apr 29 21:59:20.952: INFO: Pod downwardapi-volume-8b50526b-7b93-4852-bbe5-d4450e7240a3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:20.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8454" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":316,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:18.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [BeforeEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:18.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption-2 STEP: Waiting for a default service account to be provisioned in namespace [It] should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: listing a collection of PDBs across all namespaces STEP: listing a collection of PDBs in namespace disruption-2477 STEP: deleting a collection of PDBs STEP: Waiting for the PDB collection to be deleted [AfterEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:24.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2-5940" for this suite. [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:24.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2477" for this suite. 
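The PodDisruptionBudget steps above can be mirrored with kubectl: create a budget, list across all namespaces, then delete as a collection. The PDB name and selector are illustrative; --all issues the delete-collection call the test exercises:

kubectl create poddisruptionbudget demo-pdb --selector=app=demo --min-available=1
kubectl get pdb --all-namespaces
kubectl delete pdb --all
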
• [SLOW TEST:6.100 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:75 should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":9,"skipped":142,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:05.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. Apr 29 21:59:05.282: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:07.286: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:09.290: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:11.286: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Apr 29 21:59:11.300: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:13.304: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:15.305: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Apr 29 21:59:15.313: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 29 21:59:15.315: INFO: Pod pod-with-prestop-http-hook still exists Apr 29 21:59:17.316: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 29 21:59:17.319: INFO: Pod pod-with-prestop-http-hook still exists Apr 29 21:59:19.318: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 29 21:59:19.320: INFO: Pod pod-with-prestop-http-hook still exists Apr 29 21:59:21.317: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 29 21:59:21.319: INFO: Pod pod-with-prestop-http-hook still exists Apr 29 21:59:23.318: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 29 21:59:23.320: INFO: Pod pod-with-prestop-http-hook still exists Apr 29 21:59:25.317: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 29 21:59:25.320: INFO: Pod pod-with-prestop-http-hook no longer exists 
STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:25.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6715" for this suite. • [SLOW TEST:20.710 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":210,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:25.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 21:59:26.027: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-d80f76eb-7a35-4f96-b90c-247c0cfb8634" in namespace "security-context-test-1643" to be "Succeeded or Failed" Apr 29 21:59:26.029: INFO: Pod "busybox-readonly-false-d80f76eb-7a35-4f96-b90c-247c0cfb8634": Phase="Pending", Reason="", readiness=false. Elapsed: 1.946972ms Apr 29 21:59:28.033: INFO: Pod "busybox-readonly-false-d80f76eb-7a35-4f96-b90c-247c0cfb8634": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006058075s Apr 29 21:59:30.037: INFO: Pod "busybox-readonly-false-d80f76eb-7a35-4f96-b90c-247c0cfb8634": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010092247s Apr 29 21:59:30.037: INFO: Pod "busybox-readonly-false-d80f76eb-7a35-4f96-b90c-247c0cfb8634" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:30.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1643" for this suite. 
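The security-context case above only has to prove that the root filesystem is writable when readOnlyRootFilesystem is false. A minimal sketch; the pod name and the written path are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: writable-rootfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo ok > /writable-check && cat /writable-check"]
    securityContext:
      readOnlyRootFilesystem: false
EOF
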
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":224,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:25.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 29 21:59:25.544: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 29 21:59:27.553: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866365, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866365, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866365, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866365, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 29 21:59:30.564: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:30.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3776" for this suite. STEP: Destroying namespace "webhook-3776-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.623 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":10,"skipped":210,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:30.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 21:59:30.090: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 29 21:59:32.112: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:33.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1173" for this suite. 
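The quota scenario above is straightforward to reproduce: a quota capping the namespace at two pods, an RC asking for three, a ReplicaFailure condition while creation is blocked, and the condition clearing once the RC is scaled within quota. A sketch; the names and the busybox pod template are illustrative:

kubectl create quota condition-test --hard=pods=2
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: main
        image: busybox
        command: ["sleep", "3600"]
EOF
kubectl get rc condition-test -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].status}'
kubectl scale rc condition-test --replicas=2
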
• ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":10,"skipped":236,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:21.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1548 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Apr 29 21:59:21.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8663 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Apr 29 21:59:21.199: INFO: stderr: "" Apr 29 21:59:21.199: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Apr 29 21:59:26.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8663 get pod e2e-test-httpd-pod -o json' Apr 29 21:59:26.429: INFO: stderr: "" Apr 29 21:59:26.429: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.4.37\\\"\\n ],\\n \\\"mac\\\": \\\"a2:3b:83:0f:a3:d4\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"k8s.v1.cni.cncf.io/networks-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.4.37\\\"\\n ],\\n \\\"mac\\\": \\\"a2:3b:83:0f:a3:d4\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"kubernetes.io/psp\": \"collectd\"\n },\n \"creationTimestamp\": \"2022-04-29T21:59:21Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-8663\",\n \"resourceVersion\": \"34529\",\n \"uid\": \"17760477-b541-4927-8602-09aa91f491f3\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imagePullPolicy\": \"Always\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-trslb\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"node2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n 
\"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-trslb\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-29T21:59:21Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-29T21:59:23Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-29T21:59:23Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-29T21:59:21Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://011fa3ddfc59e45c5bb67a94ce2b66cdfa96102272baa7710ced4294b1a54759\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2022-04-29T21:59:23Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.10.190.208\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.4.37\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.4.37\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2022-04-29T21:59:21Z\"\n }\n}\n" STEP: replace the image in the pod Apr 29 21:59:26.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8663 replace -f -' Apr 29 21:59:26.797: INFO: stderr: "" Apr 29 21:59:26.797: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-1 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1552 Apr 29 21:59:26.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8663 delete pods e2e-test-httpd-pod' Apr 29 21:59:35.644: INFO: stderr: "" Apr 29 21:59:35.644: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:35.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8663" for this suite. 
• [SLOW TEST:14.630 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":13,"skipped":345,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:30.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-9a1bef46-b5d0-44c6-8bf6-4defd207d97d STEP: Creating a pod to test consume configMaps Apr 29 21:59:30.720: INFO: Waiting up to 5m0s for pod "pod-configmaps-cf7e81c6-f46e-4e8b-89b0-a9e98fa5ef87" in namespace "configmap-9561" to be "Succeeded or Failed" Apr 29 21:59:30.722: INFO: Pod "pod-configmaps-cf7e81c6-f46e-4e8b-89b0-a9e98fa5ef87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.002257ms Apr 29 21:59:32.725: INFO: Pod "pod-configmaps-cf7e81c6-f46e-4e8b-89b0-a9e98fa5ef87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005503209s Apr 29 21:59:34.730: INFO: Pod "pod-configmaps-cf7e81c6-f46e-4e8b-89b0-a9e98fa5ef87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009882306s Apr 29 21:59:36.735: INFO: Pod "pod-configmaps-cf7e81c6-f46e-4e8b-89b0-a9e98fa5ef87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014858565s STEP: Saw pod success Apr 29 21:59:36.735: INFO: Pod "pod-configmaps-cf7e81c6-f46e-4e8b-89b0-a9e98fa5ef87" satisfied condition "Succeeded or Failed" Apr 29 21:59:36.737: INFO: Trying to get logs from node node2 pod pod-configmaps-cf7e81c6-f46e-4e8b-89b0-a9e98fa5ef87 container configmap-volume-test: STEP: delete the pod Apr 29 21:59:36.754: INFO: Waiting for pod pod-configmaps-cf7e81c6-f46e-4e8b-89b0-a9e98fa5ef87 to disappear Apr 29 21:59:36.756: INFO: Pod pod-configmaps-cf7e81c6-f46e-4e8b-89b0-a9e98fa5ef87 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:36.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9561" for this suite. 
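Consuming one ConfigMap through two volumes in the same pod, as above, is just two volume entries pointing at the same ConfigMap name. A minimal sketch; names and mount paths are illustrative:

kubectl create configmap cm-demo --from-literal=data=hello
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-two-volumes
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "cat /etc/cm-a/data /etc/cm-b/data"]
    volumeMounts:
    - name: cm-a
      mountPath: /etc/cm-a
    - name: cm-b
      mountPath: /etc/cm-b
  volumes:
  - name: cm-a
    configMap:
      name: cm-demo
  - name: cm-b
    configMap:
      name: cm-demo
EOF
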
• [SLOW TEST:6.086 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":222,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:11.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Apr 29 21:59:11.650: INFO: >>> kubeConfig: /root/.kube/config Apr 29 21:59:20.229: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:39.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2482" for this suite. 
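The assertion behind the CRD test above is that schemas for CRDs in two different API groups both surface in the aggregated OpenAPI document. A sketch, assuming two hypothetical CRDs in groups foo.example.com and bar.example.com have been created; the group names appear in the published x-kubernetes-group-version-kind entries:

kubectl get --raw /openapi/v2 | grep -c 'foo.example.com'
kubectl get --raw /openapi/v2 | grep -c 'bar.example.com'
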
• [SLOW TEST:27.794 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":13,"skipped":265,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:35.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 29 21:59:35.695: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e06864f5-4cac-4959-959c-4a27180615e8" in namespace "projected-7996" to be "Succeeded or Failed" Apr 29 21:59:35.697: INFO: Pod "downwardapi-volume-e06864f5-4cac-4959-959c-4a27180615e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153283ms Apr 29 21:59:37.701: INFO: Pod "downwardapi-volume-e06864f5-4cac-4959-959c-4a27180615e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005903865s Apr 29 21:59:39.704: INFO: Pod "downwardapi-volume-e06864f5-4cac-4959-959c-4a27180615e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009199233s Apr 29 21:59:41.707: INFO: Pod "downwardapi-volume-e06864f5-4cac-4959-959c-4a27180615e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012338141s STEP: Saw pod success Apr 29 21:59:41.707: INFO: Pod "downwardapi-volume-e06864f5-4cac-4959-959c-4a27180615e8" satisfied condition "Succeeded or Failed" Apr 29 21:59:41.711: INFO: Trying to get logs from node node2 pod downwardapi-volume-e06864f5-4cac-4959-959c-4a27180615e8 container client-container: STEP: delete the pod Apr 29 21:59:41.762: INFO: Waiting for pod downwardapi-volume-e06864f5-4cac-4959-959c-4a27180615e8 to disappear Apr 29 21:59:41.765: INFO: Pod downwardapi-volume-e06864f5-4cac-4959-959c-4a27180615e8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:41.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7996" for this suite. 
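The projected downwardAPI case above relies on a defaulting rule: when a container sets no CPU limit, a resourceFieldRef on limits.cpu resolves to the node's allocatable CPU. A minimal sketch of such a volume; the pod name and mount path are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: main                # note: no resources.limits.cpu set
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: main
              resource: limits.cpu
EOF
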
• [SLOW TEST:6.107 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":349,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:36.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 29 21:59:36.851: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4cb1f3d7-27a8-40cc-9113-41ba07765434" in namespace "downward-api-8907" to be "Succeeded or Failed" Apr 29 21:59:36.854: INFO: Pod "downwardapi-volume-4cb1f3d7-27a8-40cc-9113-41ba07765434": Phase="Pending", Reason="", readiness=false. Elapsed: 2.422126ms Apr 29 21:59:38.858: INFO: Pod "downwardapi-volume-4cb1f3d7-27a8-40cc-9113-41ba07765434": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006379222s Apr 29 21:59:40.861: INFO: Pod "downwardapi-volume-4cb1f3d7-27a8-40cc-9113-41ba07765434": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009196254s Apr 29 21:59:42.865: INFO: Pod "downwardapi-volume-4cb1f3d7-27a8-40cc-9113-41ba07765434": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013986174s STEP: Saw pod success Apr 29 21:59:42.866: INFO: Pod "downwardapi-volume-4cb1f3d7-27a8-40cc-9113-41ba07765434" satisfied condition "Succeeded or Failed" Apr 29 21:59:42.869: INFO: Trying to get logs from node node2 pod downwardapi-volume-4cb1f3d7-27a8-40cc-9113-41ba07765434 container client-container: STEP: delete the pod Apr 29 21:59:42.883: INFO: Waiting for pod downwardapi-volume-4cb1f3d7-27a8-40cc-9113-41ba07765434 to disappear Apr 29 21:59:42.885: INFO: Pod downwardapi-volume-4cb1f3d7-27a8-40cc-9113-41ba07765434 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:42.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8907" for this suite. 
• [SLOW TEST:6.075 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":243,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:39.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 21:59:39.537: INFO: The status of Pod pod-secrets-46c88f8a-4ffe-4167-8a04-0e5b0806e044 is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:41.540: INFO: The status of Pod pod-secrets-46c88f8a-4ffe-4167-8a04-0e5b0806e044 is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:43.544: INFO: The status of Pod pod-secrets-46c88f8a-4ffe-4167-8a04-0e5b0806e044 is Running (Ready = true) STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:43.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7328" for this suite. 
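Note: the "should not conflict" case above mounts a Secret volume and a ConfigMap volume in one pod; both are materialized by the kubelet's atomic writer on top of wrapped emptyDir volumes, and the assertion is that the two mounts never clobber each other. A rough equivalent by hand (all names illustrative):

kubectl create secret generic wrapper-secret --from-literal=data-1=value-1
kubectl create configmap wrapper-configmap --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-demo
spec:
  restartPolicy: Never
  containers:
  - name: checker
    image: busybox:1.29
    # both volumes must be readable, each with its own content
    command: ["sh", "-c", "cat /etc/secret/data-1 /etc/cm/data-1"]
    volumeMounts:
    - { name: secret-vol, mountPath: /etc/secret }
    - { name: cm-vol, mountPath: /etc/cm }
  volumes:
  - name: secret-vol
    secret: { secretName: wrapper-secret }
  - name: cm-vol
    configMap: { name: wrapper-configmap }
EOF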
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":14,"skipped":310,"failed":0} SS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:33.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Apr 29 21:59:33.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9751 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Apr 29 21:59:33.329: INFO: stderr: "" Apr 29 21:59:33.329: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Apr 29 21:59:33.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9751 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server' Apr 29 21:59:33.742: INFO: stderr: "" Apr 29 21:59:33.742: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Apr 29 21:59:33.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9751 delete pods e2e-test-httpd-pod' Apr 29 21:59:45.218: INFO: stderr: "" Apr 29 21:59:45.219: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:45.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9751" for this suite. 
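Note: the dry-run sequence above is plain kubectl and easy to replay. A minimal sketch (pod name is illustrative; --dry-run=server asks the API server to run full validation and admission without persisting the change, so the live pod must keep its original image):

kubectl run demo-httpd --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=demo-httpd
# admitted server-side, but never persisted
kubectl patch pod demo-httpd --dry-run=server \
  -p '{"spec":{"containers":[{"name":"demo-httpd","image":"k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}}'
# still reports the httpd image
kubectl get pod demo-httpd -o jsonpath='{.spec.containers[0].image}'
kubectl delete pod demo-httpd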
• [SLOW TEST:12.072 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:903 should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":11,"skipped":247,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:15.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-6742 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 29 21:59:15.999: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 29 21:59:16.032: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:18.036: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:20.036: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 21:59:22.035: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 21:59:24.038: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 21:59:26.035: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 21:59:28.034: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 21:59:30.036: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 21:59:32.037: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 21:59:34.037: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 21:59:36.036: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 29 21:59:36.041: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 29 21:59:38.046: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 29 21:59:40.047: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 29 21:59:48.071: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Apr 29 21:59:48.071: INFO: Breadth first check of 10.244.3.139 on host 10.10.190.207... 
Apr 29 21:59:48.073: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.48:9080/dial?request=hostname&protocol=udp&host=10.244.3.139&port=8081&tries=1'] Namespace:pod-network-test-6742 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 21:59:48.073: INFO: >>> kubeConfig: /root/.kube/config Apr 29 21:59:48.393: INFO: Waiting for responses: map[] Apr 29 21:59:48.393: INFO: reached 10.244.3.139 after 0/1 tries Apr 29 21:59:48.393: INFO: Breadth first check of 10.244.4.35 on host 10.10.190.208... Apr 29 21:59:48.396: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.48:9080/dial?request=hostname&protocol=udp&host=10.244.4.35&port=8081&tries=1'] Namespace:pod-network-test-6742 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 21:59:48.396: INFO: >>> kubeConfig: /root/.kube/config Apr 29 21:59:48.505: INFO: Waiting for responses: map[] Apr 29 21:59:48.505: INFO: reached 10.244.4.35 after 0/1 tries Apr 29 21:59:48.505: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:48.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6742" for this suite. • [SLOW TEST:32.533 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":120,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:58:22.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-1137 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-1137 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1137 Apr 29 21:58:22.428: INFO: Found 0 stateful pods, waiting for 1 Apr 29 21:58:32.432: INFO: 
Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 29 21:58:32.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-1137 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 29 21:58:32.717: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 29 21:58:32.717: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 29 21:58:32.717: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 29 21:58:32.720: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 29 21:58:42.724: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 29 21:58:42.724: INFO: Waiting for statefulset status.replicas updated to 0 Apr 29 21:58:42.738: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999473s Apr 29 21:58:43.740: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.997578366s Apr 29 21:58:44.745: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.994491749s Apr 29 21:58:45.748: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.990090556s Apr 29 21:58:46.752: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.987422284s Apr 29 21:58:47.758: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.983340093s Apr 29 21:58:48.760: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.977826743s Apr 29 21:58:49.764: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.975183186s Apr 29 21:58:50.767: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.970809413s Apr 29 21:58:51.770: INFO: Verifying statefulset ss doesn't scale past 1 for another 967.494982ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1137 Apr 29 21:58:52.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-1137 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 29 21:58:53.075: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Apr 29 21:58:53.075: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 29 21:58:53.076: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 29 21:58:53.078: INFO: Found 1 stateful pods, waiting for 3 Apr 29 21:59:03.083: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 29 21:59:03.083: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 29 21:59:03.083: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 29 21:59:03.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-1137 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 29 21:59:03.437: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 29 21:59:03.437: INFO: stdout: 
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 29 21:59:03.437: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 29 21:59:03.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-1137 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 29 21:59:04.017: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 29 21:59:04.017: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 29 21:59:04.017: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 29 21:59:04.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-1137 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 29 21:59:04.438: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 29 21:59:04.438: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 29 21:59:04.438: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 29 21:59:04.438: INFO: Waiting for statefulset status.replicas updated to 0 Apr 29 21:59:04.441: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 29 21:59:14.449: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 29 21:59:14.449: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 29 21:59:14.449: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 29 21:59:14.460: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999467s Apr 29 21:59:15.465: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996299064s Apr 29 21:59:16.469: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.990278253s Apr 29 21:59:17.474: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.986101001s Apr 29 21:59:18.478: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.982612035s Apr 29 21:59:19.481: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.978397606s Apr 29 21:59:20.485: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.974993146s Apr 29 21:59:21.488: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.971110705s Apr 29 21:59:22.492: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.967063708s Apr 29 21:59:23.496: INFO: Verifying statefulset ss doesn't scale past 3 for another 963.697291ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-1137 Apr 29 21:59:24.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-1137 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 29 21:59:24.742: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Apr 29 21:59:24.742: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 29 21:59:24.742: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 29 21:59:24.742: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-1137 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 29 21:59:24.993: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Apr 29 21:59:24.993: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 29 21:59:24.993: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 29 21:59:24.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-1137 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 29 21:59:25.244: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Apr 29 21:59:25.244: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 29 21:59:25.244: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 29 21:59:25.244: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Apr 29 21:59:55.258: INFO: Deleting all statefulset in ns statefulset-1137 Apr 29 21:59:55.260: INFO: Scaling statefulset ss to 0 Apr 29 21:59:55.268: INFO: Waiting for statefulset status.replicas updated to 0 Apr 29 21:59:55.270: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:55.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1137" for this suite. 
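Note: the halt-then-resume pattern above hinges on the default OrderedReady pod management policy: pods are created in ordinal order, each only after all lower ordinals are Ready, and deleted in reverse. The mv of index.html is simply how the test flips the httpd readiness probe. A hand-run sketch of the same check (namespace and selector taken from the log; they only exist while the test runs):

# break readiness on ss-0, then ask for more replicas
kubectl -n statefulset-1137 exec ss-0 -- /bin/sh -c 'mv /usr/local/apache2/htdocs/index.html /tmp/'
kubectl -n statefulset-1137 scale statefulset ss --replicas=3
# with OrderedReady management, ss-1/ss-2 must not appear while ss-0 is unready
kubectl -n statefulset-1137 get pods -l baz=blah,foo=bar -w
# restore readiness and the scale-up proceeds in ordinal order
kubectl -n statefulset-1137 exec ss-0 -- /bin/sh -c 'mv /tmp/index.html /usr/local/apache2/htdocs/'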
• [SLOW TEST:92.889 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":5,"skipped":158,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:41.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 21:59:49.825: INFO: Deleting pod "var-expansion-2f947dc9-55c0-4c0d-ac4f-10d78716c43c" in namespace "var-expansion-4288" Apr 29 21:59:49.829: INFO: Wait up to 5m0s for pod "var-expansion-2f947dc9-55c0-4c0d-ac4f-10d78716c43c" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:55.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4288" for this suite. 
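Note: the failure asserted above comes from subpath expansion rules: in volumeMounts[].subPathExpr only $(VAR) references to the container's environment are substituted, so the backtick expression the test constructs is never shell-evaluated and the pod fails to start, which is the expected outcome. For contrast, a minimal sketch of the accepted form (all names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox:1.29
    command: ["sh", "-c", "echo ok > /logs/marker"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef: { fieldPath: metadata.name }
    volumeMounts:
    - name: workdir
      mountPath: /logs
      # expands to the pod's name; a value like `whoami` here is invalid
      subPathExpr: $(POD_NAME)
  volumes:
  - name: workdir
    emptyDir: {}
EOF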
• [SLOW TEST:14.057 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":15,"skipped":354,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:55.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 21:59:59.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-207" for this suite. 
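Note: the sysctl case above goes through the pod-level securityContext, and kernel.shm_rmid_forced is on the kubelet's default safe-sysctl allowlist, so no extra node configuration is needed. A minimal sketch (pod name illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  restartPolicy: Never
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced
      value: "1"
  containers:
  - name: checker
    image: busybox:1.29
    # should print 1 inside the pod's namespace
    command: ["sh", "-c", "cat /proc/sys/kernel/shm_rmid_forced"]
EOF
kubectl logs sysctl-demo   # run after the pod reaches Succeeded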
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":16,"skipped":393,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:43.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostport STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled Apr 29 21:59:43.619: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:45.622: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:47.623: INFO: The status of Pod pod1 is Running (Ready = true) STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.10.190.207 on the node which pod1 resides and expect scheduled Apr 29 21:59:47.635: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:49.641: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:51.639: INFO: The status of Pod pod2 is Running (Ready = true) STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.10.190.207 but use UDP protocol on the node which pod2 resides Apr 29 21:59:51.651: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:53.659: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:55.654: INFO: The status of Pod pod3 is Running (Ready = true) Apr 29 21:59:55.666: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:57.670: INFO: The status of Pod e2e-host-exec is Running (Ready = true) STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 Apr 29 21:59:57.673: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.10.190.207 http://127.0.0.1:54323/hostname] Namespace:hostport-6304 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 21:59:57.673: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.207, port: 54323 Apr 29 21:59:57.765: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.10.190.207:54323/hostname] Namespace:hostport-6304 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 21:59:57.765: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.207, port: 54323 UDP Apr 29 21:59:57.851: INFO: ExecWithOptions {Command:[/bin/sh -c nc 
-vuz -w 5 10.10.190.207 54323] Namespace:hostport-6304 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 21:59:57.851: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:00:02.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostport-6304" for this suite. • [SLOW TEST:19.363 seconds] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":15,"skipped":312,"failed":0} SSS ------------------------------ [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:55.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 21:59:55.337: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-3703 I0429 21:59:55.357803 35 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3703, replica count: 1 I0429 21:59:56.409089 35 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 21:59:57.409629 35 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 21:59:58.410091 35 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 29 21:59:58.517: INFO: Created: latency-svc-k98t4 Apr 29 21:59:58.522: INFO: Got endpoints: latency-svc-k98t4 [11.993934ms] Apr 29 21:59:58.528: INFO: Created: latency-svc-nbmmz Apr 29 21:59:58.530: INFO: Got endpoints: latency-svc-nbmmz [7.431846ms] Apr 29 21:59:58.531: INFO: Created: latency-svc-cx24v Apr 29 21:59:58.533: INFO: Got endpoints: latency-svc-cx24v [10.892284ms] Apr 29 21:59:58.533: INFO: Created: latency-svc-4trj8 Apr 29 21:59:58.536: INFO: Got endpoints: latency-svc-4trj8 [13.331041ms] Apr 29 21:59:58.536: INFO: Created: latency-svc-pnkjg Apr 29 21:59:58.538: INFO: Created: latency-svc-xx6dn Apr 29 21:59:58.539: INFO: Got endpoints: latency-svc-pnkjg [16.040744ms] Apr 29 21:59:58.541: INFO: Got endpoints: latency-svc-xx6dn [18.227798ms] Apr 29 21:59:58.542: INFO: Created: latency-svc-9xwfk Apr 29 21:59:58.543: INFO: Got endpoints: latency-svc-9xwfk [20.675591ms] Apr 29 21:59:58.544: INFO: Created: latency-svc-tqfw7 Apr 29 
21:59:58.546: INFO: Got endpoints: latency-svc-tqfw7 [23.758753ms] Apr 29 21:59:58.547: INFO: Created: latency-svc-td5j7 Apr 29 21:59:58.549: INFO: Got endpoints: latency-svc-td5j7 [26.147123ms] Apr 29 21:59:58.550: INFO: Created: latency-svc-27br2 Apr 29 21:59:58.552: INFO: Created: latency-svc-2wz79 Apr 29 21:59:58.552: INFO: Got endpoints: latency-svc-27br2 [29.881746ms] Apr 29 21:59:58.554: INFO: Got endpoints: latency-svc-2wz79 [30.671353ms] Apr 29 21:59:58.555: INFO: Created: latency-svc-chcvc Apr 29 21:59:58.558: INFO: Got endpoints: latency-svc-chcvc [34.838828ms] Apr 29 21:59:58.558: INFO: Created: latency-svc-5htjt Apr 29 21:59:58.560: INFO: Created: latency-svc-68cjm Apr 29 21:59:58.560: INFO: Got endpoints: latency-svc-5htjt [37.180598ms] Apr 29 21:59:58.562: INFO: Got endpoints: latency-svc-68cjm [38.786603ms] Apr 29 21:59:58.563: INFO: Created: latency-svc-cz6dz Apr 29 21:59:58.564: INFO: Got endpoints: latency-svc-cz6dz [41.375014ms] Apr 29 21:59:58.565: INFO: Created: latency-svc-tpv7l Apr 29 21:59:58.567: INFO: Got endpoints: latency-svc-tpv7l [44.220956ms] Apr 29 21:59:58.568: INFO: Created: latency-svc-zgmpf Apr 29 21:59:58.570: INFO: Got endpoints: latency-svc-zgmpf [40.182032ms] Apr 29 21:59:58.571: INFO: Created: latency-svc-h4ks5 Apr 29 21:59:58.573: INFO: Got endpoints: latency-svc-h4ks5 [39.336173ms] Apr 29 21:59:58.573: INFO: Created: latency-svc-f7zs2 Apr 29 21:59:58.575: INFO: Got endpoints: latency-svc-f7zs2 [39.578853ms] Apr 29 21:59:58.576: INFO: Created: latency-svc-vqr5n Apr 29 21:59:58.579: INFO: Got endpoints: latency-svc-vqr5n [39.922169ms] Apr 29 21:59:58.579: INFO: Created: latency-svc-cmnfz Apr 29 21:59:58.582: INFO: Got endpoints: latency-svc-cmnfz [40.490026ms] Apr 29 21:59:58.582: INFO: Created: latency-svc-8scwc Apr 29 21:59:58.584: INFO: Got endpoints: latency-svc-8scwc [40.725834ms] Apr 29 21:59:58.585: INFO: Created: latency-svc-cd529 Apr 29 21:59:58.587: INFO: Got endpoints: latency-svc-cd529 [40.631461ms] Apr 29 21:59:58.588: INFO: Created: latency-svc-k8l29 Apr 29 21:59:58.590: INFO: Got endpoints: latency-svc-k8l29 [41.121549ms] Apr 29 21:59:58.591: INFO: Created: latency-svc-hf2cr Apr 29 21:59:58.592: INFO: Got endpoints: latency-svc-hf2cr [39.649464ms] Apr 29 21:59:58.593: INFO: Created: latency-svc-bzvdf Apr 29 21:59:58.596: INFO: Got endpoints: latency-svc-bzvdf [41.90506ms] Apr 29 21:59:58.596: INFO: Created: latency-svc-fl5zr Apr 29 21:59:58.599: INFO: Got endpoints: latency-svc-fl5zr [41.525439ms] Apr 29 21:59:58.599: INFO: Created: latency-svc-r8gfh Apr 29 21:59:58.601: INFO: Got endpoints: latency-svc-r8gfh [41.027078ms] Apr 29 21:59:58.602: INFO: Created: latency-svc-xwt5d Apr 29 21:59:58.604: INFO: Got endpoints: latency-svc-xwt5d [42.394332ms] Apr 29 21:59:58.605: INFO: Created: latency-svc-ck82w Apr 29 21:59:58.607: INFO: Got endpoints: latency-svc-ck82w [42.880922ms] Apr 29 21:59:58.608: INFO: Created: latency-svc-s2pds Apr 29 21:59:58.611: INFO: Created: latency-svc-dq8qf Apr 29 21:59:58.613: INFO: Got endpoints: latency-svc-s2pds [45.310996ms] Apr 29 21:59:58.615: INFO: Created: latency-svc-f5gw5 Apr 29 21:59:58.618: INFO: Created: latency-svc-dl8xf Apr 29 21:59:58.619: INFO: Got endpoints: latency-svc-dq8qf [49.440934ms] Apr 29 21:59:58.621: INFO: Created: latency-svc-s8v2l Apr 29 21:59:58.624: INFO: Created: latency-svc-sbdh5 Apr 29 21:59:58.627: INFO: Created: latency-svc-svt74 Apr 29 21:59:58.630: INFO: Created: latency-svc-cnjmr Apr 29 21:59:58.633: INFO: Created: latency-svc-jl4jf Apr 29 21:59:58.635: INFO: Created: 
latency-svc-bcfz2 Apr 29 21:59:58.638: INFO: Created: latency-svc-xcknl Apr 29 21:59:58.640: INFO: Created: latency-svc-mzr77 Apr 29 21:59:58.644: INFO: Created: latency-svc-jwrhp Apr 29 21:59:58.645: INFO: Created: latency-svc-v5x5g Apr 29 21:59:58.648: INFO: Created: latency-svc-rxqrt Apr 29 21:59:58.651: INFO: Created: latency-svc-mdd2h Apr 29 21:59:58.657: INFO: Created: latency-svc-4vnvz Apr 29 21:59:58.670: INFO: Got endpoints: latency-svc-f5gw5 [97.147058ms] Apr 29 21:59:58.675: INFO: Created: latency-svc-zfxgv Apr 29 21:59:58.721: INFO: Got endpoints: latency-svc-dl8xf [145.645322ms] Apr 29 21:59:58.726: INFO: Created: latency-svc-8kzqz Apr 29 21:59:58.770: INFO: Got endpoints: latency-svc-s8v2l [191.762132ms] Apr 29 21:59:58.776: INFO: Created: latency-svc-gwkqp Apr 29 21:59:58.820: INFO: Got endpoints: latency-svc-sbdh5 [238.43748ms] Apr 29 21:59:58.825: INFO: Created: latency-svc-z7vpb Apr 29 21:59:58.870: INFO: Got endpoints: latency-svc-svt74 [285.611451ms] Apr 29 21:59:58.877: INFO: Created: latency-svc-prt9g Apr 29 21:59:58.920: INFO: Got endpoints: latency-svc-cnjmr [333.110146ms] Apr 29 21:59:58.926: INFO: Created: latency-svc-5z7hp Apr 29 21:59:58.970: INFO: Got endpoints: latency-svc-jl4jf [380.26312ms] Apr 29 21:59:58.977: INFO: Created: latency-svc-gmptn Apr 29 21:59:59.021: INFO: Got endpoints: latency-svc-bcfz2 [428.952086ms] Apr 29 21:59:59.029: INFO: Created: latency-svc-c8z8s Apr 29 21:59:59.071: INFO: Got endpoints: latency-svc-xcknl [475.437875ms] Apr 29 21:59:59.077: INFO: Created: latency-svc-hgvhp Apr 29 21:59:59.119: INFO: Got endpoints: latency-svc-mzr77 [520.045797ms] Apr 29 21:59:59.126: INFO: Created: latency-svc-8lg5w Apr 29 21:59:59.169: INFO: Got endpoints: latency-svc-jwrhp [568.214608ms] Apr 29 21:59:59.174: INFO: Created: latency-svc-qdhg9 Apr 29 21:59:59.220: INFO: Got endpoints: latency-svc-v5x5g [616.288924ms] Apr 29 21:59:59.228: INFO: Created: latency-svc-nj945 Apr 29 21:59:59.270: INFO: Got endpoints: latency-svc-rxqrt [663.120802ms] Apr 29 21:59:59.277: INFO: Created: latency-svc-xqdhf Apr 29 21:59:59.320: INFO: Got endpoints: latency-svc-mdd2h [707.602285ms] Apr 29 21:59:59.325: INFO: Created: latency-svc-m7tkp Apr 29 21:59:59.371: INFO: Got endpoints: latency-svc-4vnvz [751.468657ms] Apr 29 21:59:59.377: INFO: Created: latency-svc-kqhxt Apr 29 21:59:59.420: INFO: Got endpoints: latency-svc-zfxgv [749.734644ms] Apr 29 21:59:59.426: INFO: Created: latency-svc-fhnqn Apr 29 21:59:59.470: INFO: Got endpoints: latency-svc-8kzqz [748.906505ms] Apr 29 21:59:59.476: INFO: Created: latency-svc-h6jdq Apr 29 21:59:59.520: INFO: Got endpoints: latency-svc-gwkqp [749.721784ms] Apr 29 21:59:59.525: INFO: Created: latency-svc-wghcw Apr 29 21:59:59.570: INFO: Got endpoints: latency-svc-z7vpb [750.182833ms] Apr 29 21:59:59.577: INFO: Created: latency-svc-kxjdk Apr 29 21:59:59.620: INFO: Got endpoints: latency-svc-prt9g [750.561282ms] Apr 29 21:59:59.625: INFO: Created: latency-svc-pnkff Apr 29 21:59:59.671: INFO: Got endpoints: latency-svc-5z7hp [750.244213ms] Apr 29 21:59:59.677: INFO: Created: latency-svc-gnw74 Apr 29 21:59:59.720: INFO: Got endpoints: latency-svc-gmptn [749.846429ms] Apr 29 21:59:59.726: INFO: Created: latency-svc-5ptnp Apr 29 21:59:59.770: INFO: Got endpoints: latency-svc-c8z8s [749.048646ms] Apr 29 21:59:59.775: INFO: Created: latency-svc-hqncc Apr 29 21:59:59.821: INFO: Got endpoints: latency-svc-hgvhp [749.768797ms] Apr 29 21:59:59.826: INFO: Created: latency-svc-j7hg7 Apr 29 21:59:59.870: INFO: Got endpoints: latency-svc-8lg5w 
[750.427031ms] Apr 29 21:59:59.877: INFO: Created: latency-svc-jxg5p Apr 29 21:59:59.921: INFO: Got endpoints: latency-svc-qdhg9 [751.355819ms] Apr 29 21:59:59.926: INFO: Created: latency-svc-pqvgs Apr 29 21:59:59.970: INFO: Got endpoints: latency-svc-nj945 [749.903662ms] Apr 29 21:59:59.976: INFO: Created: latency-svc-fbt5d Apr 29 22:00:00.019: INFO: Got endpoints: latency-svc-xqdhf [748.868052ms] Apr 29 22:00:00.025: INFO: Created: latency-svc-8p8mn Apr 29 22:00:00.070: INFO: Got endpoints: latency-svc-m7tkp [749.691123ms] Apr 29 22:00:00.075: INFO: Created: latency-svc-x9hhb Apr 29 22:00:00.121: INFO: Got endpoints: latency-svc-kqhxt [749.771221ms] Apr 29 22:00:00.127: INFO: Created: latency-svc-fl8hx Apr 29 22:00:00.170: INFO: Got endpoints: latency-svc-fhnqn [750.336463ms] Apr 29 22:00:00.177: INFO: Created: latency-svc-grvt7 Apr 29 22:00:00.220: INFO: Got endpoints: latency-svc-h6jdq [749.525354ms] Apr 29 22:00:00.225: INFO: Created: latency-svc-sj7zz Apr 29 22:00:00.270: INFO: Got endpoints: latency-svc-wghcw [749.702838ms] Apr 29 22:00:00.275: INFO: Created: latency-svc-4cfvw Apr 29 22:00:00.321: INFO: Got endpoints: latency-svc-kxjdk [750.989295ms] Apr 29 22:00:00.327: INFO: Created: latency-svc-p8vsn Apr 29 22:00:00.371: INFO: Got endpoints: latency-svc-pnkff [750.505247ms] Apr 29 22:00:00.377: INFO: Created: latency-svc-tpzft Apr 29 22:00:00.419: INFO: Got endpoints: latency-svc-gnw74 [748.779787ms] Apr 29 22:00:00.425: INFO: Created: latency-svc-gqlmt Apr 29 22:00:00.470: INFO: Got endpoints: latency-svc-5ptnp [749.36536ms] Apr 29 22:00:00.476: INFO: Created: latency-svc-w77b2 Apr 29 22:00:00.520: INFO: Got endpoints: latency-svc-hqncc [750.020462ms] Apr 29 22:00:00.526: INFO: Created: latency-svc-mmsp7 Apr 29 22:00:00.570: INFO: Got endpoints: latency-svc-j7hg7 [748.743122ms] Apr 29 22:00:00.575: INFO: Created: latency-svc-54q86 Apr 29 22:00:00.620: INFO: Got endpoints: latency-svc-jxg5p [750.343902ms] Apr 29 22:00:00.626: INFO: Created: latency-svc-99ws2 Apr 29 22:00:00.670: INFO: Got endpoints: latency-svc-pqvgs [748.878864ms] Apr 29 22:00:00.675: INFO: Created: latency-svc-hs5cw Apr 29 22:00:00.720: INFO: Got endpoints: latency-svc-fbt5d [750.009153ms] Apr 29 22:00:00.726: INFO: Created: latency-svc-xhf57 Apr 29 22:00:00.770: INFO: Got endpoints: latency-svc-8p8mn [750.74729ms] Apr 29 22:00:00.776: INFO: Created: latency-svc-65ttp Apr 29 22:00:00.820: INFO: Got endpoints: latency-svc-x9hhb [749.882655ms] Apr 29 22:00:00.827: INFO: Created: latency-svc-vfd55 Apr 29 22:00:00.870: INFO: Got endpoints: latency-svc-fl8hx [748.797142ms] Apr 29 22:00:00.877: INFO: Created: latency-svc-n5bsp Apr 29 22:00:00.920: INFO: Got endpoints: latency-svc-grvt7 [749.63614ms] Apr 29 22:00:00.926: INFO: Created: latency-svc-dk8qs Apr 29 22:00:00.970: INFO: Got endpoints: latency-svc-sj7zz [750.096198ms] Apr 29 22:00:00.975: INFO: Created: latency-svc-nzbk8 Apr 29 22:00:01.020: INFO: Got endpoints: latency-svc-4cfvw [749.756086ms] Apr 29 22:00:01.025: INFO: Created: latency-svc-4mlvt Apr 29 22:00:01.071: INFO: Got endpoints: latency-svc-p8vsn [749.416117ms] Apr 29 22:00:01.077: INFO: Created: latency-svc-7vrlg Apr 29 22:00:01.120: INFO: Got endpoints: latency-svc-tpzft [748.882114ms] Apr 29 22:00:01.125: INFO: Created: latency-svc-h6qld Apr 29 22:00:01.220: INFO: Got endpoints: latency-svc-gqlmt [800.918714ms] Apr 29 22:00:01.227: INFO: Created: latency-svc-2c9hg Apr 29 22:00:01.271: INFO: Got endpoints: latency-svc-w77b2 [801.192862ms] Apr 29 22:00:01.276: INFO: Created: latency-svc-5zsgx Apr 
29 22:00:01.320: INFO: Got endpoints: latency-svc-mmsp7 [799.466351ms] Apr 29 22:00:01.325: INFO: Created: latency-svc-k4vcx Apr 29 22:00:01.371: INFO: Got endpoints: latency-svc-54q86 [800.892275ms] Apr 29 22:00:01.376: INFO: Created: latency-svc-zk7v5 Apr 29 22:00:01.420: INFO: Got endpoints: latency-svc-99ws2 [800.018017ms] Apr 29 22:00:01.426: INFO: Created: latency-svc-hzbvz Apr 29 22:00:01.469: INFO: Got endpoints: latency-svc-hs5cw [799.666475ms] Apr 29 22:00:01.475: INFO: Created: latency-svc-6gsr8 Apr 29 22:00:01.521: INFO: Got endpoints: latency-svc-xhf57 [800.063282ms] Apr 29 22:00:01.526: INFO: Created: latency-svc-sq8z8 Apr 29 22:00:01.570: INFO: Got endpoints: latency-svc-65ttp [799.823452ms] Apr 29 22:00:01.577: INFO: Created: latency-svc-d5sv5 Apr 29 22:00:01.620: INFO: Got endpoints: latency-svc-vfd55 [799.555593ms] Apr 29 22:00:01.625: INFO: Created: latency-svc-q5drz Apr 29 22:00:01.670: INFO: Got endpoints: latency-svc-n5bsp [799.952722ms] Apr 29 22:00:01.675: INFO: Created: latency-svc-8xzqb Apr 29 22:00:01.721: INFO: Got endpoints: latency-svc-dk8qs [800.818004ms] Apr 29 22:00:01.727: INFO: Created: latency-svc-xznsb Apr 29 22:00:01.770: INFO: Got endpoints: latency-svc-nzbk8 [800.157653ms] Apr 29 22:00:01.775: INFO: Created: latency-svc-b9prn Apr 29 22:00:01.821: INFO: Got endpoints: latency-svc-4mlvt [801.622155ms] Apr 29 22:00:01.827: INFO: Created: latency-svc-czqlv Apr 29 22:00:01.871: INFO: Got endpoints: latency-svc-7vrlg [799.908677ms] Apr 29 22:00:01.879: INFO: Created: latency-svc-6kfdd Apr 29 22:00:01.920: INFO: Got endpoints: latency-svc-h6qld [800.170854ms] Apr 29 22:00:01.925: INFO: Created: latency-svc-jnc6k Apr 29 22:00:01.985: INFO: Got endpoints: latency-svc-2c9hg [764.738227ms] Apr 29 22:00:01.991: INFO: Created: latency-svc-lpkw7 Apr 29 22:00:02.020: INFO: Got endpoints: latency-svc-5zsgx [749.496327ms] Apr 29 22:00:02.026: INFO: Created: latency-svc-422s6 Apr 29 22:00:02.070: INFO: Got endpoints: latency-svc-k4vcx [749.973279ms] Apr 29 22:00:02.075: INFO: Created: latency-svc-njkqq Apr 29 22:00:02.120: INFO: Got endpoints: latency-svc-zk7v5 [749.684233ms] Apr 29 22:00:02.127: INFO: Created: latency-svc-l9fpf Apr 29 22:00:02.170: INFO: Got endpoints: latency-svc-hzbvz [749.476809ms] Apr 29 22:00:02.176: INFO: Created: latency-svc-rcwtj Apr 29 22:00:02.219: INFO: Got endpoints: latency-svc-6gsr8 [749.645573ms] Apr 29 22:00:02.225: INFO: Created: latency-svc-zgfck Apr 29 22:00:02.270: INFO: Got endpoints: latency-svc-sq8z8 [749.551451ms] Apr 29 22:00:02.276: INFO: Created: latency-svc-bk246 Apr 29 22:00:02.321: INFO: Got endpoints: latency-svc-d5sv5 [751.007852ms] Apr 29 22:00:02.326: INFO: Created: latency-svc-blrvp Apr 29 22:00:02.369: INFO: Got endpoints: latency-svc-q5drz [749.776654ms] Apr 29 22:00:02.375: INFO: Created: latency-svc-fd9lb Apr 29 22:00:02.421: INFO: Got endpoints: latency-svc-8xzqb [750.912872ms] Apr 29 22:00:02.426: INFO: Created: latency-svc-5xsk8 Apr 29 22:00:02.470: INFO: Got endpoints: latency-svc-xznsb [749.582595ms] Apr 29 22:00:02.476: INFO: Created: latency-svc-d4qlm Apr 29 22:00:02.520: INFO: Got endpoints: latency-svc-b9prn [749.703374ms] Apr 29 22:00:02.525: INFO: Created: latency-svc-kzmhk Apr 29 22:00:02.570: INFO: Got endpoints: latency-svc-czqlv [748.801673ms] Apr 29 22:00:02.576: INFO: Created: latency-svc-dhpxz Apr 29 22:00:02.624: INFO: Got endpoints: latency-svc-6kfdd [753.412091ms] Apr 29 22:00:02.632: INFO: Created: latency-svc-f59h5 Apr 29 22:00:02.671: INFO: Got endpoints: latency-svc-jnc6k [750.635513ms] 
Apr 29 22:00:02.675: INFO: Created: latency-svc-qmtbf Apr 29 22:00:02.720: INFO: Got endpoints: latency-svc-lpkw7 [735.315254ms] Apr 29 22:00:02.726: INFO: Created: latency-svc-s5zlm Apr 29 22:00:02.770: INFO: Got endpoints: latency-svc-422s6 [749.876278ms] Apr 29 22:00:02.777: INFO: Created: latency-svc-jnvtw Apr 29 22:00:02.820: INFO: Got endpoints: latency-svc-njkqq [750.255341ms] Apr 29 22:00:02.826: INFO: Created: latency-svc-7dgvj Apr 29 22:00:02.870: INFO: Got endpoints: latency-svc-l9fpf [749.377967ms] Apr 29 22:00:02.875: INFO: Created: latency-svc-jqcqv Apr 29 22:00:02.921: INFO: Got endpoints: latency-svc-rcwtj [750.795399ms] Apr 29 22:00:02.926: INFO: Created: latency-svc-cdlp6 Apr 29 22:00:02.970: INFO: Got endpoints: latency-svc-zgfck [751.095349ms] Apr 29 22:00:02.975: INFO: Created: latency-svc-ht97n Apr 29 22:00:03.021: INFO: Got endpoints: latency-svc-bk246 [750.64184ms] Apr 29 22:00:03.027: INFO: Created: latency-svc-p565g Apr 29 22:00:03.071: INFO: Got endpoints: latency-svc-blrvp [749.913628ms] Apr 29 22:00:03.077: INFO: Created: latency-svc-vmq5m Apr 29 22:00:03.120: INFO: Got endpoints: latency-svc-fd9lb [750.216859ms] Apr 29 22:00:03.125: INFO: Created: latency-svc-bjdrv Apr 29 22:00:03.170: INFO: Got endpoints: latency-svc-5xsk8 [748.950506ms] Apr 29 22:00:03.175: INFO: Created: latency-svc-tpbqd Apr 29 22:00:03.220: INFO: Got endpoints: latency-svc-d4qlm [750.072825ms] Apr 29 22:00:03.228: INFO: Created: latency-svc-jzf55 Apr 29 22:00:03.270: INFO: Got endpoints: latency-svc-kzmhk [750.06524ms] Apr 29 22:00:03.275: INFO: Created: latency-svc-77kcv Apr 29 22:00:03.320: INFO: Got endpoints: latency-svc-dhpxz [749.773819ms] Apr 29 22:00:03.326: INFO: Created: latency-svc-kfmm9 Apr 29 22:00:03.370: INFO: Got endpoints: latency-svc-f59h5 [746.012464ms] Apr 29 22:00:03.377: INFO: Created: latency-svc-2xsps Apr 29 22:00:03.420: INFO: Got endpoints: latency-svc-qmtbf [749.409352ms] Apr 29 22:00:03.427: INFO: Created: latency-svc-kltgz Apr 29 22:00:03.471: INFO: Got endpoints: latency-svc-s5zlm [750.168236ms] Apr 29 22:00:03.476: INFO: Created: latency-svc-xhgjr Apr 29 22:00:03.520: INFO: Got endpoints: latency-svc-jnvtw [749.592329ms] Apr 29 22:00:03.525: INFO: Created: latency-svc-pxzv2 Apr 29 22:00:03.571: INFO: Got endpoints: latency-svc-7dgvj [750.414646ms] Apr 29 22:00:03.576: INFO: Created: latency-svc-xzv2z Apr 29 22:00:03.621: INFO: Got endpoints: latency-svc-jqcqv [751.224927ms] Apr 29 22:00:03.626: INFO: Created: latency-svc-pxhhv Apr 29 22:00:03.671: INFO: Got endpoints: latency-svc-cdlp6 [750.775906ms] Apr 29 22:00:03.677: INFO: Created: latency-svc-z2bd5 Apr 29 22:00:03.720: INFO: Got endpoints: latency-svc-ht97n [750.197866ms] Apr 29 22:00:03.726: INFO: Created: latency-svc-scnqk Apr 29 22:00:03.770: INFO: Got endpoints: latency-svc-p565g [748.744033ms] Apr 29 22:00:03.775: INFO: Created: latency-svc-9dpsp Apr 29 22:00:03.820: INFO: Got endpoints: latency-svc-vmq5m [749.37069ms] Apr 29 22:00:03.827: INFO: Created: latency-svc-n8rhv Apr 29 22:00:03.871: INFO: Got endpoints: latency-svc-bjdrv [751.072199ms] Apr 29 22:00:03.877: INFO: Created: latency-svc-pw84l Apr 29 22:00:03.921: INFO: Got endpoints: latency-svc-tpbqd [751.008391ms] Apr 29 22:00:03.927: INFO: Created: latency-svc-pwvs2 Apr 29 22:00:03.970: INFO: Got endpoints: latency-svc-jzf55 [749.225239ms] Apr 29 22:00:03.975: INFO: Created: latency-svc-kbsxj Apr 29 22:00:04.020: INFO: Got endpoints: latency-svc-77kcv [750.406609ms] Apr 29 22:00:04.025: INFO: Created: latency-svc-mm6m6 Apr 29 22:00:04.069: 
INFO: Got endpoints: latency-svc-kfmm9 [749.115289ms] Apr 29 22:00:04.075: INFO: Created: latency-svc-m7cns Apr 29 22:00:04.121: INFO: Got endpoints: latency-svc-2xsps [750.556591ms] Apr 29 22:00:04.128: INFO: Created: latency-svc-qfxjn Apr 29 22:00:04.171: INFO: Got endpoints: latency-svc-kltgz [750.418684ms] Apr 29 22:00:04.176: INFO: Created: latency-svc-zbtrd Apr 29 22:00:04.220: INFO: Got endpoints: latency-svc-xhgjr [749.404061ms] Apr 29 22:00:04.226: INFO: Created: latency-svc-8bdxj Apr 29 22:00:04.270: INFO: Got endpoints: latency-svc-pxzv2 [750.441788ms] Apr 29 22:00:04.277: INFO: Created: latency-svc-r6k7n Apr 29 22:00:04.320: INFO: Got endpoints: latency-svc-xzv2z [749.277085ms] Apr 29 22:00:04.325: INFO: Created: latency-svc-nf4vn Apr 29 22:00:04.370: INFO: Got endpoints: latency-svc-pxhhv [748.853545ms] Apr 29 22:00:04.377: INFO: Created: latency-svc-d8z4l Apr 29 22:00:04.420: INFO: Got endpoints: latency-svc-z2bd5 [748.651916ms] Apr 29 22:00:04.426: INFO: Created: latency-svc-mx7lf Apr 29 22:00:04.471: INFO: Got endpoints: latency-svc-scnqk [750.13415ms] Apr 29 22:00:04.476: INFO: Created: latency-svc-gc7d6 Apr 29 22:00:04.520: INFO: Got endpoints: latency-svc-9dpsp [749.796591ms] Apr 29 22:00:04.525: INFO: Created: latency-svc-ddvqg Apr 29 22:00:04.570: INFO: Got endpoints: latency-svc-n8rhv [749.947293ms] Apr 29 22:00:04.576: INFO: Created: latency-svc-mt4jj Apr 29 22:00:04.620: INFO: Got endpoints: latency-svc-pw84l [748.746176ms] Apr 29 22:00:04.625: INFO: Created: latency-svc-5z4db Apr 29 22:00:04.670: INFO: Got endpoints: latency-svc-pwvs2 [749.382525ms] Apr 29 22:00:04.676: INFO: Created: latency-svc-xmpjm Apr 29 22:00:04.720: INFO: Got endpoints: latency-svc-kbsxj [750.636522ms] Apr 29 22:00:04.727: INFO: Created: latency-svc-qj8l9 Apr 29 22:00:04.770: INFO: Got endpoints: latency-svc-mm6m6 [749.290715ms] Apr 29 22:00:04.775: INFO: Created: latency-svc-hrgxk Apr 29 22:00:04.820: INFO: Got endpoints: latency-svc-m7cns [750.249605ms] Apr 29 22:00:04.825: INFO: Created: latency-svc-8xkcz Apr 29 22:00:04.870: INFO: Got endpoints: latency-svc-qfxjn [748.730605ms] Apr 29 22:00:04.878: INFO: Created: latency-svc-l6dh5 Apr 29 22:00:04.921: INFO: Got endpoints: latency-svc-zbtrd [750.262666ms] Apr 29 22:00:04.926: INFO: Created: latency-svc-rzn57 Apr 29 22:00:04.970: INFO: Got endpoints: latency-svc-8bdxj [750.251114ms] Apr 29 22:00:04.976: INFO: Created: latency-svc-9rp2n Apr 29 22:00:05.021: INFO: Got endpoints: latency-svc-r6k7n [750.707017ms] Apr 29 22:00:05.027: INFO: Created: latency-svc-dx9qj Apr 29 22:00:05.070: INFO: Got endpoints: latency-svc-nf4vn [749.834137ms] Apr 29 22:00:05.076: INFO: Created: latency-svc-nzhjp Apr 29 22:00:05.120: INFO: Got endpoints: latency-svc-d8z4l [749.620159ms] Apr 29 22:00:05.124: INFO: Created: latency-svc-648p9 Apr 29 22:00:05.170: INFO: Got endpoints: latency-svc-mx7lf [750.221477ms] Apr 29 22:00:05.177: INFO: Created: latency-svc-b47wg Apr 29 22:00:05.220: INFO: Got endpoints: latency-svc-gc7d6 [749.63608ms] Apr 29 22:00:05.226: INFO: Created: latency-svc-hn89r Apr 29 22:00:05.269: INFO: Got endpoints: latency-svc-ddvqg [749.850593ms] Apr 29 22:00:05.274: INFO: Created: latency-svc-n8rfk Apr 29 22:00:05.321: INFO: Got endpoints: latency-svc-mt4jj [750.223836ms] Apr 29 22:00:05.327: INFO: Created: latency-svc-vxr8b Apr 29 22:00:05.371: INFO: Got endpoints: latency-svc-5z4db [751.149682ms] Apr 29 22:00:05.376: INFO: Created: latency-svc-q8c9m Apr 29 22:00:05.420: INFO: Got endpoints: latency-svc-xmpjm [749.561941ms] Apr 29 
22:00:05.425: INFO: Created: latency-svc-tmrp5 Apr 29 22:00:05.470: INFO: Got endpoints: latency-svc-qj8l9 [749.727871ms] Apr 29 22:00:05.476: INFO: Created: latency-svc-zqd22 Apr 29 22:00:05.520: INFO: Got endpoints: latency-svc-hrgxk [750.309213ms] Apr 29 22:00:05.526: INFO: Created: latency-svc-rk95r Apr 29 22:00:05.570: INFO: Got endpoints: latency-svc-8xkcz [750.410536ms] Apr 29 22:00:05.575: INFO: Created: latency-svc-8g5g9 Apr 29 22:00:05.620: INFO: Got endpoints: latency-svc-l6dh5 [750.204148ms] Apr 29 22:00:05.626: INFO: Created: latency-svc-2mjm2 Apr 29 22:00:05.671: INFO: Got endpoints: latency-svc-rzn57 [750.381561ms] Apr 29 22:00:05.677: INFO: Created: latency-svc-vdx56 Apr 29 22:00:05.720: INFO: Got endpoints: latency-svc-9rp2n [749.733664ms] Apr 29 22:00:05.725: INFO: Created: latency-svc-qgv8l Apr 29 22:00:05.771: INFO: Got endpoints: latency-svc-dx9qj [750.049794ms] Apr 29 22:00:05.777: INFO: Created: latency-svc-zz5v4 Apr 29 22:00:05.821: INFO: Got endpoints: latency-svc-nzhjp [750.549039ms] Apr 29 22:00:05.826: INFO: Created: latency-svc-tlsnb Apr 29 22:00:05.870: INFO: Got endpoints: latency-svc-648p9 [750.75558ms] Apr 29 22:00:05.877: INFO: Created: latency-svc-bxvct Apr 29 22:00:05.921: INFO: Got endpoints: latency-svc-b47wg [750.338303ms] Apr 29 22:00:05.926: INFO: Created: latency-svc-bq5t7 Apr 29 22:00:05.969: INFO: Got endpoints: latency-svc-hn89r [749.034155ms] Apr 29 22:00:05.975: INFO: Created: latency-svc-vbkvp Apr 29 22:00:06.021: INFO: Got endpoints: latency-svc-n8rfk [751.140582ms] Apr 29 22:00:06.026: INFO: Created: latency-svc-g999z Apr 29 22:00:06.070: INFO: Got endpoints: latency-svc-vxr8b [749.546967ms] Apr 29 22:00:06.076: INFO: Created: latency-svc-hb6jc Apr 29 22:00:06.120: INFO: Got endpoints: latency-svc-q8c9m [749.34446ms] Apr 29 22:00:06.126: INFO: Created: latency-svc-ntffd Apr 29 22:00:06.171: INFO: Got endpoints: latency-svc-tmrp5 [750.961985ms] Apr 29 22:00:06.176: INFO: Created: latency-svc-cf2vf Apr 29 22:00:06.219: INFO: Got endpoints: latency-svc-zqd22 [749.164899ms] Apr 29 22:00:06.225: INFO: Created: latency-svc-lwg7h Apr 29 22:00:06.271: INFO: Got endpoints: latency-svc-rk95r [751.083395ms] Apr 29 22:00:06.277: INFO: Created: latency-svc-78fhk Apr 29 22:00:06.321: INFO: Got endpoints: latency-svc-8g5g9 [750.495946ms] Apr 29 22:00:06.326: INFO: Created: latency-svc-wwz4l Apr 29 22:00:06.370: INFO: Got endpoints: latency-svc-2mjm2 [749.929418ms] Apr 29 22:00:06.376: INFO: Created: latency-svc-9zbcx Apr 29 22:00:06.420: INFO: Got endpoints: latency-svc-vdx56 [748.679724ms] Apr 29 22:00:06.471: INFO: Got endpoints: latency-svc-qgv8l [750.259235ms] Apr 29 22:00:06.520: INFO: Got endpoints: latency-svc-zz5v4 [748.908098ms] Apr 29 22:00:06.571: INFO: Got endpoints: latency-svc-tlsnb [750.056729ms] Apr 29 22:00:06.620: INFO: Got endpoints: latency-svc-bxvct [749.421961ms] Apr 29 22:00:06.670: INFO: Got endpoints: latency-svc-bq5t7 [749.522248ms] Apr 29 22:00:06.720: INFO: Got endpoints: latency-svc-vbkvp [750.778261ms] Apr 29 22:00:06.771: INFO: Got endpoints: latency-svc-g999z [750.168714ms] Apr 29 22:00:06.821: INFO: Got endpoints: latency-svc-hb6jc [751.102129ms] Apr 29 22:00:06.870: INFO: Got endpoints: latency-svc-ntffd [749.902337ms] Apr 29 22:00:06.921: INFO: Got endpoints: latency-svc-cf2vf [749.979967ms] Apr 29 22:00:06.970: INFO: Got endpoints: latency-svc-lwg7h [750.464075ms] Apr 29 22:00:07.021: INFO: Got endpoints: latency-svc-78fhk [749.435331ms] Apr 29 22:00:07.070: INFO: Got endpoints: latency-svc-wwz4l [749.438301ms] Apr 29 
22:00:07.120: INFO: Got endpoints: latency-svc-9zbcx [750.483031ms] Apr 29 22:00:07.120: INFO: Latencies: [7.431846ms 10.892284ms 13.331041ms 16.040744ms 18.227798ms 20.675591ms 23.758753ms 26.147123ms 29.881746ms 30.671353ms 34.838828ms 37.180598ms 38.786603ms 39.336173ms 39.578853ms 39.649464ms 39.922169ms 40.182032ms 40.490026ms 40.631461ms 40.725834ms 41.027078ms 41.121549ms 41.375014ms 41.525439ms 41.90506ms 42.394332ms 42.880922ms 44.220956ms 45.310996ms 49.440934ms 97.147058ms 145.645322ms 191.762132ms 238.43748ms 285.611451ms 333.110146ms 380.26312ms 428.952086ms 475.437875ms 520.045797ms 568.214608ms 616.288924ms 663.120802ms 707.602285ms 735.315254ms 746.012464ms 748.651916ms 748.679724ms 748.730605ms 748.743122ms 748.744033ms 748.746176ms 748.779787ms 748.797142ms 748.801673ms 748.853545ms 748.868052ms 748.878864ms 748.882114ms 748.906505ms 748.908098ms 748.950506ms 749.034155ms 749.048646ms 749.115289ms 749.164899ms 749.225239ms 749.277085ms 749.290715ms 749.34446ms 749.36536ms 749.37069ms 749.377967ms 749.382525ms 749.404061ms 749.409352ms 749.416117ms 749.421961ms 749.435331ms 749.438301ms 749.476809ms 749.496327ms 749.522248ms 749.525354ms 749.546967ms 749.551451ms 749.561941ms 749.582595ms 749.592329ms 749.620159ms 749.63608ms 749.63614ms 749.645573ms 749.684233ms 749.691123ms 749.702838ms 749.703374ms 749.721784ms 749.727871ms 749.733664ms 749.734644ms 749.756086ms 749.768797ms 749.771221ms 749.773819ms 749.776654ms 749.796591ms 749.834137ms 749.846429ms 749.850593ms 749.876278ms 749.882655ms 749.902337ms 749.903662ms 749.913628ms 749.929418ms 749.947293ms 749.973279ms 749.979967ms 750.009153ms 750.020462ms 750.049794ms 750.056729ms 750.06524ms 750.072825ms 750.096198ms 750.13415ms 750.168236ms 750.168714ms 750.182833ms 750.197866ms 750.204148ms 750.216859ms 750.221477ms 750.223836ms 750.244213ms 750.249605ms 750.251114ms 750.255341ms 750.259235ms 750.262666ms 750.309213ms 750.336463ms 750.338303ms 750.343902ms 750.381561ms 750.406609ms 750.410536ms 750.414646ms 750.418684ms 750.427031ms 750.441788ms 750.464075ms 750.483031ms 750.495946ms 750.505247ms 750.549039ms 750.556591ms 750.561282ms 750.635513ms 750.636522ms 750.64184ms 750.707017ms 750.74729ms 750.75558ms 750.775906ms 750.778261ms 750.795399ms 750.912872ms 750.961985ms 750.989295ms 751.007852ms 751.008391ms 751.072199ms 751.083395ms 751.095349ms 751.102129ms 751.140582ms 751.149682ms 751.224927ms 751.355819ms 751.468657ms 753.412091ms 764.738227ms 799.466351ms 799.555593ms 799.666475ms 799.823452ms 799.908677ms 799.952722ms 800.018017ms 800.063282ms 800.157653ms 800.170854ms 800.818004ms 800.892275ms 800.918714ms 801.192862ms 801.622155ms] Apr 29 22:00:07.121: INFO: 50 %ile: 749.733664ms Apr 29 22:00:07.121: INFO: 90 %ile: 751.224927ms Apr 29 22:00:07.121: INFO: 99 %ile: 801.192862ms Apr 29 22:00:07.121: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:00:07.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3703" for this suite. 
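Note: each "Created/Got endpoints" pair above is one latency sample: the test creates a Service selecting the replication controller's pods and measures how long the endpoints controller takes to publish a ready address, then reports the 50/90/99th percentiles over 200 samples. A rough single-sample version by hand (the RC name is from the log; the probe service name and port are illustrative):

start=$(date +%s%N)
kubectl expose rc svc-latency-rc --name=latency-probe --port=80
# poll until the Endpoints object carries at least one address
until kubectl get endpoints latency-probe -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null | grep -q .; do
  sleep 0.05
done
echo "endpoints ready after $(( ($(date +%s%N) - start) / 1000000 )) ms"
kubectl delete service latency-probe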
• [SLOW TEST:11.817 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":6,"skipped":171,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:00:02.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:00:02.969: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 29 22:00:02.975: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 29 22:00:07.979: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 29 22:00:07.979: INFO: Creating deployment "test-rolling-update-deployment" Apr 29 22:00:07.982: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 29 22:00:07.987: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 29 22:00:09.993: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 29 22:00:09.996: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866407, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866407, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866408, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866407, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 22:00:12.003: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Apr 29 22:00:12.012: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-9875 efc83a0b-9e2a-412e-988b-61ec1138bd17 36357 1 2022-04-29 22:00:07 +0000 UTC map[name:sample-pod] 
map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2022-04-29 22:00:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-04-29 22:00:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00005ae38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-04-29 22:00:07 +0000 UTC,LastTransitionTime:2022-04-29 22:00:07 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-585b757574" has successfully progressed.,LastUpdateTime:2022-04-29 22:00:11 +0000 UTC,LastTransitionTime:2022-04-29 22:00:07 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 29 22:00:12.015: INFO: New ReplicaSet "test-rolling-update-deployment-585b757574" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-585b757574 deployment-9875 9d1953fd-b648-4911-9701-fa10a8a3466b 36346 1 2022-04-29 22:00:07 +0000 UTC map[name:sample-pod pod-template-hash:585b757574]
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment efc83a0b-9e2a-412e-988b-61ec1138bd17 0xc005635e97 0xc005635e98}] [] [{kube-controller-manager Update apps/v1 2022-04-29 22:00:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"efc83a0b-9e2a-412e-988b-61ec1138bd17\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 585b757574,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005635f48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 29 22:00:12.015: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 29 22:00:12.015: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-9875 51f302f8-322f-4566-8d2b-0e93e3a282f3 36356 2 2022-04-29 22:00:02 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment efc83a0b-9e2a-412e-988b-61ec1138bd17 0xc005635d27 0xc005635d28}] [] [{e2e.test Update apps/v1 2022-04-29 22:00:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-04-29 22:00:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"efc83a0b-9e2a-412e-988b-61ec1138bd17\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005635df8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 29 22:00:12.019: INFO: Pod "test-rolling-update-deployment-585b757574-jq6g2" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-585b757574-jq6g2 test-rolling-update-deployment-585b757574- deployment-9875 b9d717ec-d735-42b6-b574-20a21c53fa7d 36345 0 2022-04-29 22:00:07 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.57" ], "mac": "9e:2c:d1:d8:c8:c0", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.57" ], "mac": "9e:2c:d1:d8:c8:c0", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rolling-update-deployment-585b757574 9d1953fd-b648-4911-9701-fa10a8a3466b 0xc002b7fd3f 0xc002b7fed0}] [] [{kube-controller-manager Update v1 2022-04-29 22:00:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9d1953fd-b648-4911-9701-fa10a8a3466b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-29 22:00:10 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-29 22:00:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.57\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cwvkg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cwvkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]
Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:00:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:00:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:00:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:00:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.57,StartTime:2022-04-29 22:00:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 22:00:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://ff5bfdd57456f69aa9c052f991a63901b623ed6790bbec18bd1ca37ec3bdb46e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.57,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:00:12.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9875" for this suite. 
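------------------------------
The deployment dump above shows a RollingUpdate strategy of MaxUnavailable:25%, MaxSurge:25%. Kubernetes resolves such percentages against the replica count, rounding maxSurge up and maxUnavailable down, which for the one-replica deployment here yields surge 1 and unavailable 0: the new pod is created first and the old one removed only once it is Ready. A back-of-the-envelope Go sketch of that arithmetic (a simplified restatement, not the controller source):

package main

import (
	"fmt"
	"math"
)

// resolve scales surge/unavailable percentages against the replica count the
// way the deployment controller does: surge rounds up, unavailable rounds
// down, and unavailable is bumped to 1 if both would otherwise be 0 so the
// rollout can still make progress.
func resolve(replicas int, surgePct, unavailPct float64) (surge, unavail int) {
	surge = int(math.Ceil(float64(replicas) * surgePct / 100))
	unavail = int(math.Floor(float64(replicas) * unavailPct / 100))
	if surge == 0 && unavail == 0 {
		unavail = 1
	}
	return surge, unavail
}

func main() {
	s, u := resolve(1, 25, 25)
	// Prints maxSurge=1 maxUnavailable=0, matching the "delete old pods and
	// create new ones" ordering verified above.
	fmt.Printf("maxSurge=%d maxUnavailable=%d\n", s, u)
}
------------------------------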
• [SLOW TEST:9.076 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":16,"skipped":315,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:42.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:00:14.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5822" for this suite. 
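------------------------------
The three containers above (terminate-cmd-rpa, -rpof, -rpn) exercise the three container restart policies, presumably Always, OnFailure and Never, and each check reduces to reading fields off the pod's status. A minimal client-go sketch of that kind of status inspection (namespace and pod name are placeholders):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "terminate-cmd-rpa", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Phase, RestartCount, Ready and State are exactly the fields the
	// STEP lines above assert on.
	fmt.Println("Phase:", pod.Status.Phase)
	for _, st := range pod.Status.ContainerStatuses {
		fmt.Printf("%s restarts=%d ready=%v state=%+v\n",
			st.Name, st.RestartCount, st.Ready, st.State)
	}
}
------------------------------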
• [SLOW TEST:31.244 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":244,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:00:14.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Apr 29 22:00:14.747: INFO: starting watch STEP: patching STEP: updating Apr 29 22:00:14.754: INFO: waiting for watch events with expected annotations Apr 29 22:00:14.754: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:00:14.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-2509" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":14,"skipped":248,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:59.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:00:16.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2487" for this suite. • [SLOW TEST:16.103 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":17,"skipped":399,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:48.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:00:16.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8173" for this suite. • [SLOW TEST:28.060 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":-1,"completed":12,"skipped":132,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:00:14.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Apr 29 22:00:14.921: INFO: Waiting up to 5m0s for pod "downward-api-96a17ce4-b86a-4c8b-a1ef-89d5840f411b" in namespace "downward-api-4839" to be "Succeeded or Failed" Apr 29 22:00:14.923: INFO: Pod "downward-api-96a17ce4-b86a-4c8b-a1ef-89d5840f411b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.901383ms Apr 29 22:00:16.929: INFO: Pod "downward-api-96a17ce4-b86a-4c8b-a1ef-89d5840f411b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007250711s Apr 29 22:00:18.937: INFO: Pod "downward-api-96a17ce4-b86a-4c8b-a1ef-89d5840f411b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01507416s STEP: Saw pod success Apr 29 22:00:18.937: INFO: Pod "downward-api-96a17ce4-b86a-4c8b-a1ef-89d5840f411b" satisfied condition "Succeeded or Failed" Apr 29 22:00:18.939: INFO: Trying to get logs from node node2 pod downward-api-96a17ce4-b86a-4c8b-a1ef-89d5840f411b container dapi-container: STEP: delete the pod Apr 29 22:00:18.956: INFO: Waiting for pod downward-api-96a17ce4-b86a-4c8b-a1ef-89d5840f411b to disappear Apr 29 22:00:18.958: INFO: Pod downward-api-96a17ce4-b86a-4c8b-a1ef-89d5840f411b no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:00:18.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4839" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":289,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:00:16.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 29 22:00:16.208: INFO: Waiting up to 5m0s for pod "pod-d0d96ff3-e5a9-4ac9-9055-fcdb8db4fc65" in namespace "emptydir-3571" to be "Succeeded or Failed" Apr 29 22:00:16.210: INFO: Pod "pod-d0d96ff3-e5a9-4ac9-9055-fcdb8db4fc65": Phase="Pending", Reason="", readiness=false. Elapsed: 1.867409ms Apr 29 22:00:18.213: INFO: Pod "pod-d0d96ff3-e5a9-4ac9-9055-fcdb8db4fc65": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.004657045s Apr 29 22:00:20.218: INFO: Pod "pod-d0d96ff3-e5a9-4ac9-9055-fcdb8db4fc65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010213499s STEP: Saw pod success Apr 29 22:00:20.218: INFO: Pod "pod-d0d96ff3-e5a9-4ac9-9055-fcdb8db4fc65" satisfied condition "Succeeded or Failed" Apr 29 22:00:20.220: INFO: Trying to get logs from node node1 pod pod-d0d96ff3-e5a9-4ac9-9055-fcdb8db4fc65 container test-container: STEP: delete the pod Apr 29 22:00:20.234: INFO: Waiting for pod pod-d0d96ff3-e5a9-4ac9-9055-fcdb8db4fc65 to disappear Apr 29 22:00:20.236: INFO: Pod pod-d0d96ff3-e5a9-4ac9-9055-fcdb8db4fc65 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:00:20.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3571" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":440,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:12.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 21:59:12.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Apr 29 21:59:19.698: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-04-29T21:59:19Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-04-29T21:59:19Z]] name:name1 resourceVersion:34428 uid:c650372d-ab1b-4fac-b4b0-ddd1007252a9] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Apr 29 21:59:29.704: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-04-29T21:59:29Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-04-29T21:59:29Z]] name:name2 resourceVersion:34679 uid:447c6062-a268-4942-bf97-15532a61549f] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Apr 29 21:59:39.712: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-04-29T21:59:19Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-04-29T21:59:39Z]] name:name1 resourceVersion:34952 
uid:c650372d-ab1b-4fac-b4b0-ddd1007252a9] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Apr 29 21:59:49.719: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-04-29T21:59:29Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-04-29T21:59:49Z]] name:name2 resourceVersion:35173 uid:447c6062-a268-4942-bf97-15532a61549f] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Apr 29 21:59:59.730: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-04-29T21:59:19Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-04-29T21:59:39Z]] name:name1 resourceVersion:35556 uid:c650372d-ab1b-4fac-b4b0-ddd1007252a9] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Apr 29 22:00:09.735: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-04-29T21:59:29Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-04-29T21:59:49Z]] name:name2 resourceVersion:36311 uid:447c6062-a268-4942-bf97-15532a61549f] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:00:20.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-7953" for this suite. 
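------------------------------
The ADDED/MODIFIED/DELETED events above are exactly what a dynamic-client watch on the custom resource delivers. A minimal Go sketch using the group/version/kind visible in the log; the resource plural is an assumption, since the log never prints it:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	gvr := schema.GroupVersionResource{
		Group:    "mygroup.example.com",
		Version:  "v1beta1",
		Resource: "wishihadchosennoxus", // assumed plural of kind WishIHadChosenNoxu
	}
	// Add .Namespace(...) before Watch for a namespace-scoped CRD.
	w, err := dc.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		// Events arrive typed ADDED/MODIFIED/DELETED, as logged above.
		fmt.Println("Got :", ev.Type)
	}
}
------------------------------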
• [SLOW TEST:68.129 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":5,"skipped":88,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:00:20.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pods Apr 29 22:00:20.340: INFO: created test-pod-1 Apr 29 22:00:20.349: INFO: created test-pod-2 Apr 29 22:00:20.358: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:00:20.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2766" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":6,"skipped":116,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:00:20.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:00:20.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3657" for this suite. 
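------------------------------
The Secrets test just above exercises the `immutable` field. Setting it is one-way: once true, writes to the secret's data are rejected and the flag cannot be unset, so the object can only be deleted and recreated (this also lets kubelets stop watching it). A minimal client-go sketch, with placeholder names:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	immutable := true
	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-immutable"},
		Data:       map[string][]byte{"key": []byte("value")},
		Immutable:  &immutable, // subsequent updates to Data will be rejected
	}
	if _, err := cs.CoreV1().Secrets("default").Create(context.TODO(), sec, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------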
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":7,"skipped":140,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:00:16.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 29 22:00:16.650: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0821e61d-015e-4be1-801f-c94060b46f23" in namespace "projected-6255" to be "Succeeded or Failed" Apr 29 22:00:16.653: INFO: Pod "downwardapi-volume-0821e61d-015e-4be1-801f-c94060b46f23": Phase="Pending", Reason="", readiness=false. Elapsed: 3.124993ms Apr 29 22:00:18.656: INFO: Pod "downwardapi-volume-0821e61d-015e-4be1-801f-c94060b46f23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005785987s Apr 29 22:00:20.659: INFO: Pod "downwardapi-volume-0821e61d-015e-4be1-801f-c94060b46f23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008814037s STEP: Saw pod success Apr 29 22:00:20.659: INFO: Pod "downwardapi-volume-0821e61d-015e-4be1-801f-c94060b46f23" satisfied condition "Succeeded or Failed" Apr 29 22:00:20.662: INFO: Trying to get logs from node node1 pod downwardapi-volume-0821e61d-015e-4be1-801f-c94060b46f23 container client-container: STEP: delete the pod Apr 29 22:00:20.673: INFO: Waiting for pod downwardapi-volume-0821e61d-015e-4be1-801f-c94060b46f23 to disappear Apr 29 22:00:20.675: INFO: Pod downwardapi-volume-0821e61d-015e-4be1-801f-c94060b46f23 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:00:20.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6255" for this suite. 
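------------------------------
The projected downward-API test above relies on a resourceFieldRef: when the named container sets no memory limit, "limits.memory" falls back to the node's allocatable memory, which is what the pod reads out of the volume file. A minimal sketch of that volume definition (names are placeholders):

package main

import corev1 "k8s.io/api/core/v1"

func downwardVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container", // must name a container in the pod
								Resource:      "limits.memory",
							},
						}},
					},
				}},
			},
		},
	}
}

func main() {}
------------------------------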
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":138,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:00:20.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-7540/configmap-test-f763a038-85e0-4205-bed9-dac9ebf5f405 STEP: Creating a pod to test consume configMaps Apr 29 22:00:20.565: INFO: Waiting up to 5m0s for pod "pod-configmaps-edc2f4e0-01fb-4282-a251-cb605fd4aa4a" in namespace "configmap-7540" to be "Succeeded or Failed" Apr 29 22:00:20.567: INFO: Pod "pod-configmaps-edc2f4e0-01fb-4282-a251-cb605fd4aa4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.452612ms Apr 29 22:00:22.570: INFO: Pod "pod-configmaps-edc2f4e0-01fb-4282-a251-cb605fd4aa4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005661048s Apr 29 22:00:24.576: INFO: Pod "pod-configmaps-edc2f4e0-01fb-4282-a251-cb605fd4aa4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011584399s STEP: Saw pod success Apr 29 22:00:24.576: INFO: Pod "pod-configmaps-edc2f4e0-01fb-4282-a251-cb605fd4aa4a" satisfied condition "Succeeded or Failed" Apr 29 22:00:24.578: INFO: Trying to get logs from node node1 pod pod-configmaps-edc2f4e0-01fb-4282-a251-cb605fd4aa4a container env-test: STEP: delete the pod Apr 29 22:00:24.594: INFO: Waiting for pod pod-configmaps-edc2f4e0-01fb-4282-a251-cb605fd4aa4a to disappear Apr 29 22:00:24.596: INFO: Pod pod-configmaps-edc2f4e0-01fb-4282-a251-cb605fd4aa4a no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:00:24.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7540" for this suite. • ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:00:07.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Apr 29 22:00:07.190: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:00:09.193: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:00:11.195: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:00:13.193: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Apr 29 22:00:13.207: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:00:15.211: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:00:17.210: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Apr 29 22:00:17.216: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 29 22:00:17.219: INFO: Pod pod-with-prestop-exec-hook still exists Apr 29 22:00:19.223: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 29 22:00:19.226: INFO: Pod pod-with-prestop-exec-hook still exists Apr 29 22:00:21.221: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 29 22:00:21.225: INFO: Pod pod-with-prestop-exec-hook still exists Apr 29 22:00:23.222: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 29 22:00:23.224: INFO: Pod pod-with-prestop-exec-hook still exists Apr 29 22:00:25.221: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 29 22:00:25.224: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:00:26.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9551" for this suite. 
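------------------------------
The preStop exec hook verified above runs a command inside the container just before termination; the kubelet then reports the result to the HTTP-serving helper pod created first. A minimal sketch of attaching such a hook to a container spec (command is a placeholder; note that client-go of the v1.21 era in this log names the type corev1.Handler rather than corev1.LifecycleHandler):

package main

import corev1 "k8s.io/api/core/v1"

func withPreStop(c corev1.Container) corev1.Container {
	c.Lifecycle = &corev1.Lifecycle{
		PreStop: &corev1.LifecycleHandler{
			Exec: &corev1.ExecAction{
				// Runs in the container right before termination; the kubelet
				// waits for it, bounded by terminationGracePeriodSeconds.
				Command: []string{"sh", "-c", "echo prestop"},
			},
		},
	}
	return c
}

func main() {}
------------------------------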
• [SLOW TEST:18.977 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":181,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:00:26.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:00:26.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8895" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:00:20.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-759b2c55-0414-4cb4-9098-aae151d7b788 STEP: Creating secret with name s-test-opt-upd-f53e6ce1-e13d-424e-8570-f8c87986afd6 STEP: Creating the pod Apr 29 22:00:20.831: INFO: The status of Pod pod-projected-secrets-dbdfc526-09b9-4cc7-916b-156c1c09cf62 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:00:22.837: INFO: The status of Pod pod-projected-secrets-dbdfc526-09b9-4cc7-916b-156c1c09cf62 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:00:24.838: INFO: The status of Pod pod-projected-secrets-dbdfc526-09b9-4cc7-916b-156c1c09cf62 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-759b2c55-0414-4cb4-9098-aae151d7b788 STEP: Updating secret s-test-opt-upd-f53e6ce1-e13d-424e-8570-f8c87986afd6 STEP: Creating secret with name s-test-opt-create-4f9ac528-05f8-4e04-8bbd-79eb424fe2d2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:00:28.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1093" for this suite. 
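------------------------------
The projected-secret test above hinges on the Optional flag: a projected source marked optional may be absent when the pod starts, and the kubelet syncs the keys in (or out) as the secrets are created, updated and deleted, which is the "waiting to observe update in volume" step. A minimal sketch of such a volume (secret names follow the log's pattern but are placeholders):

package main

import corev1 "k8s.io/api/core/v1"

func projectedSecrets() corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "creds",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-create"},
						Optional:             &optional, // may not exist yet; synced in when created
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-upd"},
					}},
				},
			},
		},
	}
}

func main() {}
------------------------------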
• [SLOW TEST:8.134 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":181,"failed":0} SSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":8,"skipped":182,"failed":0} [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:00:26.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-6c9161e0-969c-4128-b53d-db60863499f2 STEP: Creating a pod to test consume configMaps Apr 29 22:00:26.211: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6593efde-debd-4470-8c82-89f580d1d3f9" in namespace "projected-2385" to be "Succeeded or Failed" Apr 29 22:00:26.214: INFO: Pod "pod-projected-configmaps-6593efde-debd-4470-8c82-89f580d1d3f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.755222ms Apr 29 22:00:28.217: INFO: Pod "pod-projected-configmaps-6593efde-debd-4470-8c82-89f580d1d3f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006149079s Apr 29 22:00:30.222: INFO: Pod "pod-projected-configmaps-6593efde-debd-4470-8c82-89f580d1d3f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011296837s STEP: Saw pod success Apr 29 22:00:30.222: INFO: Pod "pod-projected-configmaps-6593efde-debd-4470-8c82-89f580d1d3f9" satisfied condition "Succeeded or Failed" Apr 29 22:00:30.225: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-6593efde-debd-4470-8c82-89f580d1d3f9 container agnhost-container: STEP: delete the pod Apr 29 22:00:30.490: INFO: Waiting for pod pod-projected-configmaps-6593efde-debd-4470-8c82-89f580d1d3f9 to disappear Apr 29 22:00:30.492: INFO: Pod pod-projected-configmaps-6593efde-debd-4470-8c82-89f580d1d3f9 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:00:30.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2385" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":182,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:00:28.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-5a7d1681-ebfc-44d1-8932-9c93257f9c52 STEP: Creating a pod to test consume configMaps Apr 29 22:00:28.969: INFO: Waiting up to 5m0s for pod "pod-configmaps-50007a37-5770-4afe-9df9-d07bf0e8d1ae" in namespace "configmap-3774" to be "Succeeded or Failed" Apr 29 22:00:28.971: INFO: Pod "pod-configmaps-50007a37-5770-4afe-9df9-d07bf0e8d1ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21218ms Apr 29 22:00:30.975: INFO: Pod "pod-configmaps-50007a37-5770-4afe-9df9-d07bf0e8d1ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005575229s Apr 29 22:00:32.979: INFO: Pod "pod-configmaps-50007a37-5770-4afe-9df9-d07bf0e8d1ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009823518s STEP: Saw pod success Apr 29 22:00:32.979: INFO: Pod "pod-configmaps-50007a37-5770-4afe-9df9-d07bf0e8d1ae" satisfied condition "Succeeded or Failed" Apr 29 22:00:32.981: INFO: Trying to get logs from node node1 pod pod-configmaps-50007a37-5770-4afe-9df9-d07bf0e8d1ae container agnhost-container: STEP: delete the pod Apr 29 22:00:32.994: INFO: Waiting for pod pod-configmaps-50007a37-5770-4afe-9df9-d07bf0e8d1ae to disappear Apr 29 22:00:32.996: INFO: Pod pod-configmaps-50007a37-5770-4afe-9df9-d07bf0e8d1ae no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:00:32.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3774" for this suite. 
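------------------------------
The "mappings as non-root" case above combines two pieces: a configMap key remapped to a custom path via items, and a pod-level securityContext running the container as a non-root UID. A minimal sketch of that pod spec (key, path and UID are placeholders):

package main

import corev1 "k8s.io/api/core/v1"

func podSpec() corev1.PodSpec {
	uid := int64(1000)
	return corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid}, // non-root
		Containers: []corev1.Container{{
			Name:  "agnhost-container",
			Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "cm",
				MountPath: "/etc/configmap-volume",
			}},
		}},
		Volumes: []corev1.Volume{{
			Name: "cm",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
					// Remap key "data-2" to "path/to/data-2" inside the mount.
					Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
				},
			},
		}},
	}
}

func main() {}
------------------------------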
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":191,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:00:30.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-c185cab0-1862-4c91-9914-4badd708fb11
STEP: Creating a pod to test consume configMaps
Apr 29 22:00:30.543: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3b42ecc2-70ad-4164-af88-d859f4986bf6" in namespace "projected-3894" to be "Succeeded or Failed"
Apr 29 22:00:30.546: INFO: Pod "pod-projected-configmaps-3b42ecc2-70ad-4164-af88-d859f4986bf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.987386ms
Apr 29 22:00:32.548: INFO: Pod "pod-projected-configmaps-3b42ecc2-70ad-4164-af88-d859f4986bf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005546971s
Apr 29 22:00:34.552: INFO: Pod "pod-projected-configmaps-3b42ecc2-70ad-4164-af88-d859f4986bf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009521576s
STEP: Saw pod success
Apr 29 22:00:34.553: INFO: Pod "pod-projected-configmaps-3b42ecc2-70ad-4164-af88-d859f4986bf6" satisfied condition "Succeeded or Failed"
Apr 29 22:00:34.555: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-3b42ecc2-70ad-4164-af88-d859f4986bf6 container agnhost-container:
STEP: delete the pod
Apr 29 22:00:34.663: INFO: Waiting for pod pod-projected-configmaps-3b42ecc2-70ad-4164-af88-d859f4986bf6 to disappear
Apr 29 22:00:34.665: INFO: Pod pod-projected-configmaps-3b42ecc2-70ad-4164-af88-d859f4986bf6 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:00:34.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3894" for this suite.
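Nearly every volume test in this log shows the same three-line cadence: Pending, Pending, Succeeded, each stamped with an Elapsed time, under a 5m0s budget. That is a poll loop over the pod phase until it reaches a terminal state. A sketch of the pattern with apimachinery's wait helpers (namespace and pod name are placeholders):

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, name := "default", "my-test-pod" // assumptions

	// Poll every 2s for up to 5m, mirroring the log's
	// `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", name, p.Status.Phase)
		switch p.Status.Phase {
		case v1.PodSucceeded, v1.PodFailed:
			return true, nil // terminal phase reached; caller checks which one
		}
		return false, nil // still Pending/Running, keep polling
	})
	if err != nil {
		panic(err)
	}
}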
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":187,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:00:12.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to change the type from NodePort to ExternalName [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a service nodeport-service with the type=NodePort in namespace services-4154
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-4154
STEP: creating replication controller externalsvc in namespace services-4154
I0429 22:00:12.086099 32 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4154, replica count: 2
I0429 22:00:15.138609 32 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0429 22:00:18.140171 32 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the NodePort service to type=ExternalName
Apr 29 22:00:18.157: INFO: Creating new exec pod
Apr 29 22:00:22.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4154 exec execpod9rldq -- /bin/sh -x -c nslookup nodeport-service.services-4154.svc.cluster.local'
Apr 29 22:00:22.568: INFO: stderr: "+ nslookup nodeport-service.services-4154.svc.cluster.local\n"
Apr 29 22:00:22.568: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nnodeport-service.services-4154.svc.cluster.local\tcanonical name = externalsvc.services-4154.svc.cluster.local.\nName:\texternalsvc.services-4154.svc.cluster.local\nAddress: 10.233.5.243\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-4154, will wait for the garbage collector to delete the pods
Apr 29 22:00:22.625: INFO: Deleting ReplicationController externalsvc took: 3.358824ms
Apr 29 22:00:22.726: INFO: Terminating ReplicationController externalsvc pods took: 100.956216ms
Apr 29 22:00:35.236: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:00:35.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4154" for this suite.
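The nslookup output above proves the conversion worked: the old nodeport-service name now resolves as a CNAME to externalsvc's FQDN. A rough client-go sketch of the type flip itself; this is not the suite's jig code, and the exact set of fields that must be cleared on conversion is an assumption worth verifying against your server version:

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "services-4154" // namespace from the log; any namespace works
	ctx := context.TODO()

	svc, err := cs.CoreV1().Services(ns).Get(ctx, "nodeport-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Repoint the name at the backing service's FQDN instead of a nodePort.
	svc.Spec.Type = v1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc." + ns + ".svc.cluster.local"
	svc.Spec.ClusterIP = "" // clusterIP must be cleared for ExternalName
	svc.Spec.Ports = nil    // nodePorts have no meaning on an ExternalName service
	// Note: newer servers may also require clearing Spec.ClusterIPs.
	if _, err := cs.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}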
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• [SLOW TEST:23.202 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should be able to change the type from NodePort to ExternalName [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":17,"skipped":322,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:00:33.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 29 22:00:33.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
Apr 29 22:00:33.078: INFO: The status of Pod pod-logs-websocket-ad504075-2af5-46d6-84fb-542cdb205f31 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 22:00:35.081: INFO: The status of Pod pod-logs-websocket-ad504075-2af5-46d6-84fb-542cdb205f31 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 22:00:37.084: INFO: The status of Pod pod-logs-websocket-ad504075-2af5-46d6-84fb-542cdb205f31 is Running (Ready = true)
[AfterEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:00:37.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3311" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":207,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:00:34.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Apr 29 22:00:34.755: INFO: Waiting up to 5m0s for pod "downward-api-11706315-c3bc-410a-bf84-47a8b64bc242" in namespace "downward-api-4923" to be "Succeeded or Failed"
Apr 29 22:00:34.759: INFO: Pod "downward-api-11706315-c3bc-410a-bf84-47a8b64bc242": Phase="Pending", Reason="", readiness=false. Elapsed: 3.7264ms
Apr 29 22:00:36.762: INFO: Pod "downward-api-11706315-c3bc-410a-bf84-47a8b64bc242": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007103021s
Apr 29 22:00:38.767: INFO: Pod "downward-api-11706315-c3bc-410a-bf84-47a8b64bc242": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011828977s
STEP: Saw pod success
Apr 29 22:00:38.767: INFO: Pod "downward-api-11706315-c3bc-410a-bf84-47a8b64bc242" satisfied condition "Succeeded or Failed"
Apr 29 22:00:38.770: INFO: Trying to get logs from node node1 pod downward-api-11706315-c3bc-410a-bf84-47a8b64bc242 container dapi-container:
STEP: delete the pod
Apr 29 22:00:39.066: INFO: Waiting for pod downward-api-11706315-c3bc-410a-bf84-47a8b64bc242 to disappear
Apr 29 22:00:39.069: INFO: Pod downward-api-11706315-c3bc-410a-bf84-47a8b64bc242 no longer exists
[AfterEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:00:39.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4923" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":212,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:00:35.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Apr 29 22:00:35.292: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5f918520-8b68-42f9-b1f3-792f0380d0a0" in namespace "downward-api-863" to be "Succeeded or Failed"
Apr 29 22:00:35.295: INFO: Pod "downwardapi-volume-5f918520-8b68-42f9-b1f3-792f0380d0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.617296ms
Apr 29 22:00:37.298: INFO: Pod "downwardapi-volume-5f918520-8b68-42f9-b1f3-792f0380d0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005688489s
Apr 29 22:00:39.302: INFO: Pod "downwardapi-volume-5f918520-8b68-42f9-b1f3-792f0380d0a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009120446s
STEP: Saw pod success
Apr 29 22:00:39.302: INFO: Pod "downwardapi-volume-5f918520-8b68-42f9-b1f3-792f0380d0a0" satisfied condition "Succeeded or Failed"
Apr 29 22:00:39.304: INFO: Trying to get logs from node node2 pod downwardapi-volume-5f918520-8b68-42f9-b1f3-792f0380d0a0 container client-container:
STEP: delete the pod
Apr 29 22:00:39.315: INFO: Waiting for pod downwardapi-volume-5f918520-8b68-42f9-b1f3-792f0380d0a0 to disappear
Apr 29 22:00:39.317: INFO: Pod downwardapi-volume-5f918520-8b68-42f9-b1f3-792f0380d0a0 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:00:39.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-863" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":328,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:00:39.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sysctl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with one valid and two invalid sysctls
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:00:39.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-6793" for this suite.
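The sysctl test just above never waits for a pod: "one valid and two invalid sysctls" is rejected by API-server validation at create time, so the pod is never scheduled. A sketch of a pod spec that would trip the same validation; the valid name kernel.shm_rmid_forced is a safe namespaced sysctl, and the two invalid names are illustrative (validation rejects any name that does not match the allowed pattern):

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "default" // assumption

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			SecurityContext: &v1.PodSecurityContext{
				Sysctls: []v1.Sysctl{
					{Name: "kernel.shm_rmid_forced", Value: "1"}, // valid, namespaced
					{Name: "foo-", Value: "bar"},                 // invalid name
					{Name: "_invalid", Value: "1"},               // invalid name
				},
			},
			Containers: []v1.Container{{Name: "c", Image: "busybox:1.34", Command: []string{"true"}}},
		},
	}
	// Expect a validation error here; the pod is rejected before scheduling.
	_, err = cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{})
	fmt.Println("expected Invalid error:", err)
}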
•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":19,"skipped":346,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:00:18.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: set up a multi version CRD
Apr 29 22:00:19.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:00:42.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9194" for this suite.
• [SLOW TEST:23.567 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
removes definition from spec when one version gets changed to not be served [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":16,"skipped":297,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:00:39.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token:
Apr 29 22:00:39.158: INFO: Waiting up to 5m0s for pod "test-pod-c64236db-747b-4001-8ccf-138a001cfc7d" in namespace "svcaccounts-2238" to be "Succeeded or Failed"
Apr 29 22:00:39.161: INFO: Pod "test-pod-c64236db-747b-4001-8ccf-138a001cfc7d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.195242ms
Apr 29 22:00:41.165: INFO: Pod "test-pod-c64236db-747b-4001-8ccf-138a001cfc7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006733155s
Apr 29 22:00:43.169: INFO: Pod "test-pod-c64236db-747b-4001-8ccf-138a001cfc7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010993081s
STEP: Saw pod success
Apr 29 22:00:43.169: INFO: Pod "test-pod-c64236db-747b-4001-8ccf-138a001cfc7d" satisfied condition "Succeeded or Failed"
Apr 29 22:00:43.172: INFO: Trying to get logs from node node2 pod test-pod-c64236db-747b-4001-8ccf-138a001cfc7d container agnhost-container:
STEP: delete the pod
Apr 29 22:00:43.187: INFO: Waiting for pod test-pod-c64236db-747b-4001-8ccf-138a001cfc7d to disappear
Apr 29 22:00:43.189: INFO: Pod test-pod-c64236db-747b-4001-8ccf-138a001cfc7d no longer exists
[AfterEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:00:43.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2238" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":12,"skipped":235,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:00:39.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Apr 29 22:00:39.501: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f15a70ed-419e-410a-bece-c85e332ec749" in namespace "downward-api-3139" to be "Succeeded or Failed"
Apr 29 22:00:39.503: INFO: Pod "downwardapi-volume-f15a70ed-419e-410a-bece-c85e332ec749": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066289ms
Apr 29 22:00:41.507: INFO: Pod "downwardapi-volume-f15a70ed-419e-410a-bece-c85e332ec749": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005725472s
Apr 29 22:00:43.512: INFO: Pod "downwardapi-volume-f15a70ed-419e-410a-bece-c85e332ec749": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010586432s
STEP: Saw pod success
Apr 29 22:00:43.512: INFO: Pod "downwardapi-volume-f15a70ed-419e-410a-bece-c85e332ec749" satisfied condition "Succeeded or Failed"
Apr 29 22:00:43.515: INFO: Trying to get logs from node node2 pod downwardapi-volume-f15a70ed-419e-410a-bece-c85e332ec749 container client-container:
STEP: delete the pod
Apr 29 22:00:43.616: INFO: Waiting for pod downwardapi-volume-f15a70ed-419e-410a-bece-c85e332ec749 to disappear
Apr 29 22:00:43.618: INFO: Pod downwardapi-volume-f15a70ed-419e-410a-bece-c85e332ec749 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:00:43.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3139" for this suite.
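The "container's cpu request" test that closes this block exposes the container's own resource request to the container through a downward API volume. A sketch of that mechanism (names, image, and the 250m request are illustrative):

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "default" // assumption

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-cpu-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.34",
				Command: []string{"cat", "/etc/podinfo/cpu_request"},
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{v1.ResourceCPU: resource.MustParse("250m")},
				},
				VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{
					DownwardAPI: &v1.DownwardAPIVolumeSource{
						Items: []v1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							// Projects this container's requests.cpu into the file.
							ResourceFieldRef: &v1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	_, err = cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}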
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":385,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:00:43.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should adopt matching pods on creation [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Given a Pod with a 'name' label pod-adoption is created
Apr 29 22:00:43.698: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true)
Apr 29 22:00:45.701: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true)
Apr 29 22:00:47.702: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true)
Apr 29 22:00:49.703: INFO: The status of Pod pod-adoption is Running (Ready = true)
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:00:50.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5776" for this suite.
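Adoption works because the RC's selector matches the pre-existing, ownerless pod: instead of creating a new replica, the controller sets itself as the pod's owner. A hedged sketch of the setup (images and names are illustrative, not the suite's fixtures):

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "default" // assumption
	ctx := context.TODO()
	labels := map[string]string{"name": "pod-adoption"}

	// 1. An orphan pod carrying the label, with no ownerReferences.
	orphan := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{Name: "c", Image: "busybox:1.34", Command: []string{"sleep", "3600"}}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, orphan, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// 2. An RC whose selector matches that label; it should adopt the pod
	// rather than create a second replica.
	one := int32(1)
	rc := &v1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: v1.ReplicationControllerSpec{
			Replicas: &one,
			Selector: labels,
			Template: &v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{Name: "c", Image: "busybox:1.34", Command: []string{"sleep", "3600"}}},
				},
			},
		},
	}
	if _, err := cs.CoreV1().ReplicationControllers(ns).Create(ctx, rc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// 3. Adoption is visible as an ownerReference on the original pod.
	p, err := cs.CoreV1().Pods(ns).Get(ctx, "pod-adoption", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ownerReferences:", p.OwnerReferences)
}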
• [SLOW TEST:7.060 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":21,"skipped":403,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:00:43.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should invoke init containers on a RestartAlways pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Apr 29 22:00:43.264: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:00:50.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4833" for this suite.
• [SLOW TEST:7.668 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should invoke init containers on a RestartAlways pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":13,"skipped":257,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:00:37.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157
[It] should call prestop when killing a pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating server pod server in namespace prestop-5024
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-5024
STEP: Deleting pre-stop pod
Apr 29 22:00:54.214: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:00:54.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-5024" for this suite.
• [SLOW TEST:17.090 seconds]
[sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
should call prestop when killing a pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:00:42.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7457.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-7457.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7457.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7457.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7457.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-7457.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7457.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-7457.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7457.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7457.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-7457.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7457.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-7457.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7457.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-7457.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7457.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-7457.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7457.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 29 22:00:50.839: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7457.svc.cluster.local from pod dns-7457/dns-test-1946ffd5-bae6-4d5c-a71b-2f06cabd52eb: the server could not find the requested resource (get pods dns-test-1946ffd5-bae6-4d5c-a71b-2f06cabd52eb)
Apr 29 22:00:50.841: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7457.svc.cluster.local from pod dns-7457/dns-test-1946ffd5-bae6-4d5c-a71b-2f06cabd52eb: the server could not find the requested resource (get pods dns-test-1946ffd5-bae6-4d5c-a71b-2f06cabd52eb)
Apr 29 22:00:50.845: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7457.svc.cluster.local from pod dns-7457/dns-test-1946ffd5-bae6-4d5c-a71b-2f06cabd52eb: the server could not find the requested resource (get pods dns-test-1946ffd5-bae6-4d5c-a71b-2f06cabd52eb)
Apr 29 22:00:50.847: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7457.svc.cluster.local from pod dns-7457/dns-test-1946ffd5-bae6-4d5c-a71b-2f06cabd52eb: the server could not find the requested resource (get pods dns-test-1946ffd5-bae6-4d5c-a71b-2f06cabd52eb)
Apr 29 22:00:50.856: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7457.svc.cluster.local from pod dns-7457/dns-test-1946ffd5-bae6-4d5c-a71b-2f06cabd52eb: the server could not find the requested resource (get pods dns-test-1946ffd5-bae6-4d5c-a71b-2f06cabd52eb)
Apr 29 22:00:50.859: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7457.svc.cluster.local from pod dns-7457/dns-test-1946ffd5-bae6-4d5c-a71b-2f06cabd52eb: the server could not find the requested resource (get pods dns-test-1946ffd5-bae6-4d5c-a71b-2f06cabd52eb)
Apr 29 22:00:50.861: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7457.svc.cluster.local from pod dns-7457/dns-test-1946ffd5-bae6-4d5c-a71b-2f06cabd52eb: the server could not find the requested resource (get pods dns-test-1946ffd5-bae6-4d5c-a71b-2f06cabd52eb)
Apr 29 22:00:50.863: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7457.svc.cluster.local from pod dns-7457/dns-test-1946ffd5-bae6-4d5c-a71b-2f06cabd52eb: the server could not find the requested resource (get pods dns-test-1946ffd5-bae6-4d5c-a71b-2f06cabd52eb)
Apr 29 22:00:50.869: INFO: Lookups using dns-7457/dns-test-1946ffd5-bae6-4d5c-a71b-2f06cabd52eb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7457.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7457.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7457.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7457.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7457.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7457.svc.cluster.local jessie_udp@dns-test-service-2.dns-7457.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7457.svc.cluster.local]
Apr 29 22:00:55.900: INFO: DNS probes using dns-7457/dns-test-1946ffd5-bae6-4d5c-a71b-2f06cabd52eb succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:00:55.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7457" for this suite.
• [SLOW TEST:13.173 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should provide DNS for pods for Subdomain [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":17,"skipped":395,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:00:55.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-ab38ed16-15a6-42bf-afed-6fc96ddea68a
STEP: Creating a pod to test consume configMaps
Apr 29 22:00:55.981: INFO: Waiting up to 5m0s for pod "pod-configmaps-d4edd9cd-749e-46dd-b768-e2daad3b278c" in namespace "configmap-4006" to be "Succeeded or Failed"
Apr 29 22:00:55.983: INFO: Pod "pod-configmaps-d4edd9cd-749e-46dd-b768-e2daad3b278c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150779ms
Apr 29 22:00:57.986: INFO: Pod "pod-configmaps-d4edd9cd-749e-46dd-b768-e2daad3b278c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005188218s
Apr 29 22:00:59.992: INFO: Pod "pod-configmaps-d4edd9cd-749e-46dd-b768-e2daad3b278c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011166481s
STEP: Saw pod success
Apr 29 22:00:59.992: INFO: Pod "pod-configmaps-d4edd9cd-749e-46dd-b768-e2daad3b278c" satisfied condition "Succeeded or Failed"
Apr 29 22:00:59.994: INFO: Trying to get logs from node node1 pod pod-configmaps-d4edd9cd-749e-46dd-b768-e2daad3b278c container agnhost-container:
STEP: delete the pod
Apr 29 22:01:00.008: INFO: Waiting for pod pod-configmaps-d4edd9cd-749e-46dd-b768-e2daad3b278c to disappear
Apr 29 22:01:00.010: INFO: Pod pod-configmaps-d4edd9cd-749e-46dd-b768-e2daad3b278c no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:01:00.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4006" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":403,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:00:50.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Creating a NodePort Service
STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota
STEP: Ensuring resource quota status captures service creation
STEP: Deleting Services
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:01:02.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2334" for this suite.
• [SLOW TEST:11.098 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a service. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":14,"skipped":298,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:01:00.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Apr 29 22:01:00.062: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aeb448c0-bcfb-4a2c-87fc-ffa06e9a0f29" in namespace "projected-1509" to be "Succeeded or Failed"
Apr 29 22:01:00.065: INFO: Pod "downwardapi-volume-aeb448c0-bcfb-4a2c-87fc-ffa06e9a0f29": Phase="Pending", Reason="", readiness=false. Elapsed: 3.469518ms
Apr 29 22:01:02.070: INFO: Pod "downwardapi-volume-aeb448c0-bcfb-4a2c-87fc-ffa06e9a0f29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008134603s
Apr 29 22:01:04.078: INFO: Pod "downwardapi-volume-aeb448c0-bcfb-4a2c-87fc-ffa06e9a0f29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016347517s
STEP: Saw pod success
Apr 29 22:01:04.078: INFO: Pod "downwardapi-volume-aeb448c0-bcfb-4a2c-87fc-ffa06e9a0f29" satisfied condition "Succeeded or Failed"
Apr 29 22:01:04.080: INFO: Trying to get logs from node node1 pod downwardapi-volume-aeb448c0-bcfb-4a2c-87fc-ffa06e9a0f29 container client-container:
STEP: delete the pod
Apr 29 22:01:04.121: INFO: Waiting for pod downwardapi-volume-aeb448c0-bcfb-4a2c-87fc-ffa06e9a0f29 to disappear
Apr 29 22:01:04.124: INFO: Pod downwardapi-volume-aeb448c0-bcfb-4a2c-87fc-ffa06e9a0f29 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:01:04.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1509" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":407,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:01:02.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should run through the lifecycle of Pods and PodStatus [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a Pod with a static label
STEP: watching for Pod to be ready
Apr 29 22:01:02.181: INFO: observed Pod pod-test in namespace pods-4387 in phase Pending with labels: map[test-pod-static:true] & conditions []
Apr 29 22:01:02.185: INFO: observed Pod pod-test in namespace pods-4387 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:01:02 +0000 UTC }]
Apr 29 22:01:02.194: INFO: observed Pod pod-test in namespace pods-4387 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:01:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:01:02 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:01:02 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:01:02 +0000 UTC }]
Apr 29 22:01:03.595: INFO: observed Pod pod-test in namespace pods-4387 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:01:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:01:02 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:01:02 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:01:02 +0000 UTC }]
Apr 29 22:01:04.599: INFO: Found Pod pod-test in namespace pods-4387 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:01:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:01:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:01:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:01:02 +0000 UTC }]
STEP: patching the Pod with a new Label and updated data
Apr 29 22:01:04.608: INFO: observed event type ADDED
STEP: getting the Pod and ensuring that it's patched
STEP: getting the PodStatus
STEP: replacing the Pod's status Ready condition to False
STEP: check the Pod again to ensure its Ready conditions are False
STEP: deleting the Pod via a Collection with a LabelSelector
STEP: watching for the Pod to be deleted
Apr 29 22:01:04.628: INFO: observed event type ADDED
Apr 29 22:01:04.628: INFO: observed event type MODIFIED
Apr 29 22:01:04.628: INFO: observed event type MODIFIED
Apr 29 22:01:04.628: INFO: observed event type MODIFIED
Apr 29 22:01:04.628: INFO: observed event type MODIFIED
Apr 29 22:01:04.628: INFO: observed event type MODIFIED
Apr 29 22:01:04.628: INFO: observed event type MODIFIED
Apr 29 22:01:04.628: INFO: observed event type MODIFIED
[AfterEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:01:04.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4387" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":15,"skipped":314,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:01:04.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-4f2e29ec-c00a-4f43-9477-34ec34058f09
STEP: Creating a pod to test consume secrets
Apr 29 22:01:04.178: INFO: Waiting up to 5m0s for pod "pod-secrets-da627fdc-cf30-4e54-bcc2-e6ebc1c2c518" in namespace "secrets-612" to be "Succeeded or Failed"
Apr 29 22:01:04.181: INFO: Pod "pod-secrets-da627fdc-cf30-4e54-bcc2-e6ebc1c2c518": Phase="Pending", Reason="", readiness=false. Elapsed: 2.311674ms
Apr 29 22:01:06.184: INFO: Pod "pod-secrets-da627fdc-cf30-4e54-bcc2-e6ebc1c2c518": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005891376s
Apr 29 22:01:08.188: INFO: Pod "pod-secrets-da627fdc-cf30-4e54-bcc2-e6ebc1c2c518": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010054468s
STEP: Saw pod success
Apr 29 22:01:08.189: INFO: Pod "pod-secrets-da627fdc-cf30-4e54-bcc2-e6ebc1c2c518" satisfied condition "Succeeded or Failed"
Apr 29 22:01:08.190: INFO: Trying to get logs from node node1 pod pod-secrets-da627fdc-cf30-4e54-bcc2-e6ebc1c2c518 container secret-volume-test:
STEP: delete the pod
Apr 29 22:01:08.205: INFO: Waiting for pod pod-secrets-da627fdc-cf30-4e54-bcc2-e6ebc1c2c518 to disappear
Apr 29 22:01:08.207: INFO: Pod pod-secrets-da627fdc-cf30-4e54-bcc2-e6ebc1c2c518 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:01:08.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-612" for this suite.
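The Secrets test that ends this block mirrors the earlier ConfigMap ones, just with a Secret as the volume source. A brief sketch (secret name, key, and image are illustrative):

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "default" // assumption
	ctx := context.TODO()

	sec := &v1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-secret"},
		StringData: map[string]string{"data-1": "value-1"}, // encoded to .data by the server
	}
	if _, err := cs.CoreV1().Secrets(ns).Create(ctx, sec, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-volume-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox:1.34",
				Command:      []string{"cat", "/etc/secret-volume/data-1"},
				VolumeMounts: []v1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
			Volumes: []v1.Volume{{
				Name: "secret-volume",
				VolumeSource: v1.VolumeSource{
					Secret: &v1.SecretVolumeSource{SecretName: "demo-secret"},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}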
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":410,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:01:04.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Apr 29 22:01:04.705: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f7371804-cae7-41ac-a892-0b45a7ebf952" in namespace "downward-api-2552" to be "Succeeded or Failed"
Apr 29 22:01:04.707: INFO: Pod "downwardapi-volume-f7371804-cae7-41ac-a892-0b45a7ebf952": Phase="Pending", Reason="", readiness=false. Elapsed: 1.863772ms
Apr 29 22:01:06.712: INFO: Pod "downwardapi-volume-f7371804-cae7-41ac-a892-0b45a7ebf952": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006142541s
Apr 29 22:01:08.718: INFO: Pod "downwardapi-volume-f7371804-cae7-41ac-a892-0b45a7ebf952": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012158084s
STEP: Saw pod success
Apr 29 22:01:08.718: INFO: Pod "downwardapi-volume-f7371804-cae7-41ac-a892-0b45a7ebf952" satisfied condition "Succeeded or Failed"
Apr 29 22:01:08.721: INFO: Trying to get logs from node node1 pod downwardapi-volume-f7371804-cae7-41ac-a892-0b45a7ebf952 container client-container:
STEP: delete the pod
Apr 29 22:01:08.734: INFO: Waiting for pod downwardapi-volume-f7371804-cae7-41ac-a892-0b45a7ebf952 to disappear
Apr 29 22:01:08.736: INFO: Pod downwardapi-volume-f7371804-cae7-41ac-a892-0b45a7ebf952 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:01:08.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2552" for this suite.
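The point of the "node allocatable (cpu) as default cpu limit" test is the downward API's fallback: when a container sets no CPU limit, a limits.cpu reference resolves to the node's allocatable CPU instead of failing. A sketch using the env-var form of the same selector, with a divisor to get millicores (names are illustrative):

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "default" // assumption

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "cpu-limit-default-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.34",
				Command: []string{"sh", "-c", "echo CPU_LIMIT_MILLIS=$CPU_LIMIT_MILLIS"},
				// Deliberately no resources.limits: the kubelet substitutes
				// the node's allocatable CPU, which is what the test asserts.
				Env: []v1.EnvVar{{
					Name: "CPU_LIMIT_MILLIS",
					ValueFrom: &v1.EnvVarSource{
						ResourceFieldRef: &v1.ResourceFieldSelector{
							Resource: "limits.cpu",
							Divisor:  resource.MustParse("1m"), // report in millicores
						},
					},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}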
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":331,"failed":0}
SSSSS
------------------------------
{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":17,"skipped":221,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:00:54.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 22:00:54.660: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 22:00:56.669: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866454, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866454, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866454, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866454, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 22:00:58.676: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866454, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866454, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866454, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866454, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 22:01:01.680: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 29 22:01:01.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:01:09.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2866" for this suite.
STEP: Destroying namespace "webhook-2866-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:15.550 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny custom resource creation, update and deletion [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":18,"skipped":221,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:01:08.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 29 22:01:08.294: INFO: Waiting up to 5m0s for pod "pod-5b3104d3-2100-480d-9a5c-95f38e4e0abb" in namespace "emptydir-5222" to be "Succeeded or Failed"
Apr 29 22:01:08.300: INFO: Pod "pod-5b3104d3-2100-480d-9a5c-95f38e4e0abb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.930059ms
Apr 29 22:01:10.303: INFO: Pod "pod-5b3104d3-2100-480d-9a5c-95f38e4e0abb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008763317s
Apr 29 22:01:12.307: INFO: Pod "pod-5b3104d3-2100-480d-9a5c-95f38e4e0abb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012700332s
STEP: Saw pod success
Apr 29 22:01:12.307: INFO: Pod "pod-5b3104d3-2100-480d-9a5c-95f38e4e0abb" satisfied condition "Succeeded or Failed"
Apr 29 22:01:12.310: INFO: Trying to get logs from node node2 pod pod-5b3104d3-2100-480d-9a5c-95f38e4e0abb container test-container:
STEP: delete the pod
Apr 29 22:01:12.324: INFO: Waiting for pod pod-5b3104d3-2100-480d-9a5c-95f38e4e0abb to disappear
Apr 29 22:01:12.326: INFO: Pod pod-5b3104d3-2100-480d-9a5c-95f38e4e0abb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:01:12.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5222" for this suite.
•
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:00:20.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Apr 29 22:00:20.290: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-87 f3339c1d-6984-450b-80d0-b9d7e02d3a26 37442 0 2022-04-29 22:00:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-29 22:00:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 29 22:00:20.290: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-87 f3339c1d-6984-450b-80d0-b9d7e02d3a26 37442 0 2022-04-29 22:00:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-29 22:00:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Apr 29 22:00:30.299: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-87 f3339c1d-6984-450b-80d0-b9d7e02d3a26 37745 0 2022-04-29 22:00:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-29 22:00:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 29 22:00:30.299: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-87 f3339c1d-6984-450b-80d0-b9d7e02d3a26 37745 0 2022-04-29 22:00:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-29 22:00:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Apr 29 22:00:40.308: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-87 f3339c1d-6984-450b-80d0-b9d7e02d3a26 38024 0 2022-04-29 22:00:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-29 22:00:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 29 22:00:40.308: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-87 f3339c1d-6984-450b-80d0-b9d7e02d3a26 38024 0 2022-04-29 22:00:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-29 22:00:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Apr 29 22:00:50.314: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-87 f3339c1d-6984-450b-80d0-b9d7e02d3a26 38239 0 2022-04-29 22:00:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-29 22:00:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 29 22:00:50.314: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-87 f3339c1d-6984-450b-80d0-b9d7e02d3a26 38239 0 2022-04-29 22:00:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-29 22:00:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Apr 29 22:01:00.323: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-87 6ea2cae4-e0f1-4f0a-a19d-a13aed77b775 38498 0 2022-04-29 22:01:00 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-29 22:01:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 29 22:01:00.323: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-87 6ea2cae4-e0f1-4f0a-a19d-a13aed77b775 38498 0 2022-04-29 22:01:00 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-29 22:01:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Apr 29 22:01:10.327: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-87 6ea2cae4-e0f1-4f0a-a19d-a13aed77b775 38759 0 2022-04-29 22:01:00 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-29 22:01:00 +0000 UTC FieldsV1
{"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 29 22:01:10.327: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-87 6ea2cae4-e0f1-4f0a-a19d-a13aed77b775 38759 0 2022-04-29 22:01:00 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-29 22:01:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:01:20.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-87" for this suite. • [SLOW TEST:60.069 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":19,"skipped":450,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:01:09.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:01:20.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7734" for this suite. • [SLOW TEST:11.059 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":-1,"completed":19,"skipped":231,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:45.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with configMap that has name projected-configmap-test-upd-a93e8e10-eb13-4a97-a03a-31642fca10d5 STEP: Creating the pod Apr 29 21:59:45.296: INFO: The status of Pod pod-projected-configmaps-c73fa94d-0d9f-4473-b74d-f8ee81042763 is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:47.299: INFO: The status of Pod pod-projected-configmaps-c73fa94d-0d9f-4473-b74d-f8ee81042763 is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:49.300: INFO: The status of Pod pod-projected-configmaps-c73fa94d-0d9f-4473-b74d-f8ee81042763 is Pending, waiting for it to be Running (with Ready = true) Apr 29 21:59:51.300: INFO: The status of Pod pod-projected-configmaps-c73fa94d-0d9f-4473-b74d-f8ee81042763 is Running (Ready = true) STEP: Updating configmap projected-configmap-test-upd-a93e8e10-eb13-4a97-a03a-31642fca10d5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:01:21.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4148" for this suite. 
• [SLOW TEST:95.776 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":256,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:01:08.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Apr 29 22:01:09.112: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 29 22:01:09.123: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 29 22:01:11.132: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866469, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866469, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866469, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866469, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 29 22:01:14.143: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:01:14.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3757-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:01:22.277: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6122" for this suite. STEP: Destroying namespace "webhook-6122-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.548 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":17,"skipped":336,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:01:20.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for all pods to be running Apr 29 22:01:22.968: INFO: running pods: 0 < 3 Apr 29 22:01:24.974: INFO: running pods: 0 < 3 Apr 29 22:01:26.974: INFO: running pods: 0 < 3 Apr 29 22:01:28.974: INFO: running pods: 0 < 3 [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:01:30.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-6242" for this suite. 
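The PodDisruptionBudget whose status update the spec above waits for is an ordinary policy/v1 object. A rough client-go equivalent that creates one and reads the status the disruption controller fills in; the name, namespace, selector, and threshold are all placeholders:

package main

import (
	"context"
	"fmt"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	minAvailable := intstr.FromInt(2)
	pdb := &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-pdb", Namespace: "default"},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "demo"},
			},
		},
	}
	created, err := client.PolicyV1().PodDisruptionBudgets("default").Create(ctx, pdb, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// The disruption controller populates Status (CurrentHealthy,
	// DisruptionsAllowed, ...) once it has observed the matching pods;
	// that status change is what the spec is polling for.
	fmt.Printf("created %s, disruptionsAllowed=%d\n", created.Name, created.Status.DisruptionsAllowed)
}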
• [SLOW TEST:10.112 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":20,"skipped":233,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:01:21.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics Apr 29 22:01:31.128: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) Apr 29 22:01:31.195: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:01:31.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7514" for this suite. 
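"Delete pods created by rc when not orphaning" comes down to the deletion propagation policy on the delete call. A sketch of the non-orphaning delete, assuming a hypothetical ReplicationController name and namespace:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Background propagation deletes the RC immediately and lets the
	// garbage collector remove its pods via their ownerReferences.
	// DeletePropagationOrphan would leave the pods behind instead,
	// which is exactly the behaviour this spec rules out.
	propagation := metav1.DeletePropagationBackground
	err = client.CoreV1().ReplicationControllers("default").Delete(
		context.Background(), "demo-rc",
		metav1.DeleteOptions{PropagationPolicy: &propagation})
	if err != nil {
		panic(err)
	}
	fmt.Println("rc deleted; its pods will be garbage collected")
}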
• [SLOW TEST:10.141 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":13,"skipped":270,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:01:22.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4444 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4444;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4444 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4444;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4444.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4444.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4444.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4444.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4444.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4444.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4444.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4444.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4444.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4444.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4444.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4444.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4444.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 223.4.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.4.223_udp@PTR;check="$$(dig +tcp +noall +answer +search 223.4.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.4.223_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4444 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4444;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4444 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4444;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4444.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4444.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4444.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4444.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4444.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4444.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4444.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4444.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4444.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4444.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4444.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4444.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4444.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 223.4.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.4.223_udp@PTR;check="$$(dig +tcp +noall +answer +search 223.4.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.4.223_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 29 22:01:30.540: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4444/dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af: the server could not find the requested resource (get pods dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af) Apr 29 22:01:30.543: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4444/dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af: the server could not find the requested resource (get pods dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af) Apr 29 22:01:30.546: INFO: Unable to read wheezy_udp@dns-test-service.dns-4444 from pod dns-4444/dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af: the server could not find the requested resource (get pods dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af) Apr 29 22:01:30.549: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4444 from pod dns-4444/dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af: the server could not find the requested resource (get pods dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af) Apr 29 22:01:30.552: INFO: Unable to read wheezy_udp@dns-test-service.dns-4444.svc from pod dns-4444/dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af: the server could not find the requested resource (get pods dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af) Apr 29 22:01:30.555: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4444.svc from pod dns-4444/dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af: the server could not find the requested resource (get pods dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af) Apr 29 22:01:30.558: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4444.svc from pod dns-4444/dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af: the server could not find the requested resource (get pods dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af) Apr 29 22:01:30.561: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4444.svc from pod dns-4444/dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af: the server could not find the requested resource (get pods dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af) Apr 29 22:01:30.582: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4444/dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af: the server could not find the requested resource (get pods dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af) Apr 29 22:01:30.584: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4444/dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af: the server could not find the requested resource (get pods dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af) Apr 29 22:01:30.587: INFO: Unable to read jessie_udp@dns-test-service.dns-4444 from pod dns-4444/dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af: the server could not find the requested resource (get pods dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af) Apr 29 22:01:30.590: INFO: Unable to read jessie_tcp@dns-test-service.dns-4444 from pod dns-4444/dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af: the server could not find the requested resource (get pods dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af) Apr 29 22:01:30.592: INFO: Unable to read jessie_udp@dns-test-service.dns-4444.svc from pod dns-4444/dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af: the server could not find the requested resource (get pods dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af) Apr 29 22:01:30.595: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-4444.svc from pod dns-4444/dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af: the server could not find the requested resource (get pods dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af) Apr 29 22:01:30.597: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4444.svc from pod dns-4444/dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af: the server could not find the requested resource (get pods dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af) Apr 29 22:01:30.600: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4444.svc from pod dns-4444/dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af: the server could not find the requested resource (get pods dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af) Apr 29 22:01:30.614: INFO: Lookups using dns-4444/dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4444 wheezy_tcp@dns-test-service.dns-4444 wheezy_udp@dns-test-service.dns-4444.svc wheezy_tcp@dns-test-service.dns-4444.svc wheezy_udp@_http._tcp.dns-test-service.dns-4444.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4444.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4444 jessie_tcp@dns-test-service.dns-4444 jessie_udp@dns-test-service.dns-4444.svc jessie_tcp@dns-test-service.dns-4444.svc jessie_udp@_http._tcp.dns-test-service.dns-4444.svc jessie_tcp@_http._tcp.dns-test-service.dns-4444.svc] Apr 29 22:01:35.687: INFO: DNS probes using dns-4444/dns-test-96afbe34-efb5-4eb8-bd2b-8523978922af succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:01:35.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4444" for this suite. 
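The wheezy/jessie probe loops above lean entirely on the pod's resolv.conf search path to expand partial names, which is why the early lookups fail until the service's records are published. The same expansion can be observed from Go inside a pod; run outside a cluster these lookups simply fail. The names below echo the spec's namespace but are otherwise illustrative:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Inside a pod, /etc/resolv.conf carries search domains such as
	// <namespace>.svc.cluster.local and svc.cluster.local, so partial
	// names expand the same way the dig +search probes in the spec do.
	for _, name := range []string{
		"dns-test-service",          // relies on the search path entirely
		"dns-test-service.dns-4444", // namespace-qualified, still partial
	} {
		addrs, err := net.LookupHost(name)
		if err != nil {
			fmt.Printf("%s: lookup failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s -> %v\n", name, addrs)
	}
}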
• [SLOW TEST:13.249 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":18,"skipped":420,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:01:35.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service endpoint-test2 in namespace services-7247 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7247 to expose endpoints map[] Apr 29 22:01:35.782: INFO: Failed to get Endpoints object: endpoints "endpoint-test2" not found Apr 29 22:01:36.790: INFO: successfully validated that service endpoint-test2 in namespace services-7247 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-7247 Apr 29 22:01:36.803: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:01:38.807: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:01:40.806: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7247 to expose endpoints map[pod1:[80]] Apr 29 22:01:40.817: INFO: successfully validated that service endpoint-test2 in namespace services-7247 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-7247 Apr 29 22:01:40.828: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:01:42.832: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:01:44.834: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7247 to expose endpoints map[pod1:[80] pod2:[80]] Apr 29 22:01:44.848: INFO: successfully validated that service endpoint-test2 in namespace services-7247 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-7247 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7247 to expose endpoints map[pod2:[80]] Apr 29 22:01:44.866: INFO: successfully validated that service endpoint-test2 in namespace services-7247 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-7247 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7247 to expose endpoints map[] Apr 29 22:01:44.878: INFO: successfully validated that service endpoint-test2 in namespace services-7247 exposes endpoints map[] [AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:01:44.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7247" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:9.139 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":19,"skipped":436,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:01:31.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:01:31.035: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Apr 29 22:01:39.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5318 --namespace=crd-publish-openapi-5318 create -f -' Apr 29 22:01:39.669: INFO: stderr: "" Apr 29 22:01:39.669: INFO: stdout: "e2e-test-crd-publish-openapi-9214-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 29 22:01:39.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5318 --namespace=crd-publish-openapi-5318 delete e2e-test-crd-publish-openapi-9214-crds test-foo' Apr 29 22:01:39.837: INFO: stderr: "" Apr 29 22:01:39.837: INFO: stdout: "e2e-test-crd-publish-openapi-9214-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Apr 29 22:01:39.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5318 --namespace=crd-publish-openapi-5318 apply -f -' Apr 29 22:01:40.186: INFO: stderr: "" Apr 29 22:01:40.186: INFO: stdout: "e2e-test-crd-publish-openapi-9214-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 29 22:01:40.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5318 --namespace=crd-publish-openapi-5318 delete e2e-test-crd-publish-openapi-9214-crds test-foo' Apr 29 22:01:40.349: INFO: stderr: "" Apr 29 22:01:40.349: INFO: stdout: "e2e-test-crd-publish-openapi-9214-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Apr 29 22:01:40.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5318 --namespace=crd-publish-openapi-5318 create -f -' Apr 29 22:01:40.640: INFO: rc: 1 
Apr 29 22:01:40.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5318 --namespace=crd-publish-openapi-5318 apply -f -' Apr 29 22:01:40.961: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Apr 29 22:01:40.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5318 --namespace=crd-publish-openapi-5318 create -f -' Apr 29 22:01:41.249: INFO: rc: 1 Apr 29 22:01:41.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5318 --namespace=crd-publish-openapi-5318 apply -f -' Apr 29 22:01:41.550: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Apr 29 22:01:41.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5318 explain e2e-test-crd-publish-openapi-9214-crds' Apr 29 22:01:41.922: INFO: stderr: "" Apr 29 22:01:41.922: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9214-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Apr 29 22:01:41.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5318 explain e2e-test-crd-publish-openapi-9214-crds.metadata' Apr 29 22:01:42.290: INFO: stderr: "" Apr 29 22:01:42.290: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9214-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. 
Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. 
Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Apr 29 22:01:42.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5318 explain e2e-test-crd-publish-openapi-9214-crds.spec' Apr 29 22:01:42.643: INFO: stderr: "" Apr 29 22:01:42.643: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9214-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Apr 29 22:01:42.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5318 explain e2e-test-crd-publish-openapi-9214-crds.spec.bars' Apr 29 22:01:42.997: INFO: stderr: "" Apr 29 22:01:42.997: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9214-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Apr 29 22:01:42.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5318 explain e2e-test-crd-publish-openapi-9214-crds.spec.bars2' Apr 29 22:01:43.344: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:01:47.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5318" for this suite. 
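The rc: 1 rejections above are kubectl's client-side validation enforcing the schema published for the CRD. A sketch of a v1 CustomResourceDefinition carrying a comparable openAPIV3Schema; the group, names, and fields mirror the Foo/Bar shape that kubectl explain printed, but are otherwise hypothetical:

package main

import (
	"context"
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := apiextclient.NewForConfigOrDie(cfg)

	// A structural schema: "name" is required on each bar, and unknown
	// properties are rejected by kubectl's client-side validation.
	barSchema := apiextv1.JSONSchemaProps{
		Type:     "object",
		Required: []string{"name"},
		Properties: map[string]apiextv1.JSONSchemaProps{
			"name": {Type: "string"},
			"age":  {Type: "string"},
			"bazs": {Type: "array", Items: &apiextv1.JSONSchemaPropsOrArray{
				Schema: &apiextv1.JSONSchemaProps{Type: "string"}}},
		},
	}
	schema := &apiextv1.JSONSchemaProps{
		Type: "object",
		Properties: map[string]apiextv1.JSONSchemaProps{
			"spec": {Type: "object", Properties: map[string]apiextv1.JSONSchemaProps{
				"bars": {Type: "array", Items: &apiextv1.JSONSchemaPropsOrArray{
					Schema: &barSchema}},
			}},
		},
	}

	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextv1.CustomResourceValidation{OpenAPIV3Schema: schema},
			}},
		},
	}
	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(
		context.Background(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("crd created; its schema is published to OpenAPI")
}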
• [SLOW TEST:16.014 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":21,"skipped":244,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:01:44.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-0aa7fc8e-c430-489e-bf03-a771a2fbca43 STEP: Creating a pod to test consume secrets Apr 29 22:01:44.931: INFO: Waiting up to 5m0s for pod "pod-secrets-f8296573-9d05-4a91-81fd-90a3e87d7f0c" in namespace "secrets-5898" to be "Succeeded or Failed" Apr 29 22:01:44.934: INFO: Pod "pod-secrets-f8296573-9d05-4a91-81fd-90a3e87d7f0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193866ms Apr 29 22:01:46.936: INFO: Pod "pod-secrets-f8296573-9d05-4a91-81fd-90a3e87d7f0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004858721s Apr 29 22:01:48.941: INFO: Pod "pod-secrets-f8296573-9d05-4a91-81fd-90a3e87d7f0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009538854s STEP: Saw pod success Apr 29 22:01:48.941: INFO: Pod "pod-secrets-f8296573-9d05-4a91-81fd-90a3e87d7f0c" satisfied condition "Succeeded or Failed" Apr 29 22:01:48.944: INFO: Trying to get logs from node node2 pod pod-secrets-f8296573-9d05-4a91-81fd-90a3e87d7f0c container secret-volume-test: STEP: delete the pod Apr 29 22:01:48.958: INFO: Waiting for pod pod-secrets-f8296573-9d05-4a91-81fd-90a3e87d7f0c to disappear Apr 29 22:01:48.960: INFO: Pod pod-secrets-f8296573-9d05-4a91-81fd-90a3e87d7f0c no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:01:48.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5898" for this suite. 
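Consuming one Secret in multiple volumes, as the spec above does, is just two volume entries pointing at the same SecretName with distinct mount paths. A minimal sketch; the image, names, and key paths are placeholders:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo", Namespace: "default"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				// Both volumes reference the same Secret.
				{Name: "secret-volume-1", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"}}},
				{Name: "secret-volume-2", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"}}},
			},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(
		context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("pod created; it should reach Succeeded once both mounts are readable")
}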
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":437,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:01:48.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:01:49.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6222 version' Apr 29 22:01:49.117: INFO: stderr: "" Apr 29 22:01:49.117: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.9\", GitCommit:\"b631974d68ac5045e076c86a5c66fba6f128dc72\", GitTreeState:\"clean\", BuildDate:\"2022-01-19T17:51:12Z\", GoVersion:\"go1.16.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.1\", GitCommit:\"5e58841cce77d4bc13713ad2b91fa0d961e69192\", GitTreeState:\"clean\", BuildDate:\"2021-05-12T14:12:29Z\", GoVersion:\"go1.16.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:01:49.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6222" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":21,"skipped":443,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:01:49.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Starting the proxy Apr 29 22:01:49.162: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4505 proxy --unix-socket=/tmp/kubectl-proxy-unix776556635/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:01:49.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4505" for this suite. 
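A proxy bound to a unix socket, as in the --unix-socket spec above, is reached by pinning the HTTP transport's dialer to that socket; retrieving /api/ through it looks like this small sketch, assuming a placeholder socket path:

package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// Pair with: kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock
	socket := "/tmp/kubectl-proxy.sock" // placeholder path
	client := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
			// Every request dials the unix socket regardless of host/port.
			return net.Dial("unix", socket)
		},
	}}

	// The host part of the URL is ignored once DialContext pins the socket.
	resp, err := client.Get("http://localhost/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}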
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":22,"skipped":447,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:57:43.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-d0ee7a71-b661-4981-aba1-d11fd8177fe9 in namespace container-probe-5587 Apr 29 21:57:49.559: INFO: Started pod busybox-d0ee7a71-b661-4981-aba1-d11fd8177fe9 in namespace container-probe-5587 STEP: checking the pod's current state and verifying that restartCount is present Apr 29 21:57:49.562: INFO: Initial restart count of pod busybox-d0ee7a71-b661-4981-aba1-d11fd8177fe9 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:01:50.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5587" for this suite. • [SLOW TEST:246.531 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":24,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:01:47.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-eff806d4-c5ee-4c3e-9a4e-1032ba12347b STEP: Creating a pod to test consume configMaps Apr 29 22:01:47.127: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-70df108d-1a95-4933-ba9d-d074a19704be" in namespace "projected-1609" to be "Succeeded or Failed" Apr 29 22:01:47.130: INFO: Pod "pod-projected-configmaps-70df108d-1a95-4933-ba9d-d074a19704be": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.184842ms Apr 29 22:01:49.133: INFO: Pod "pod-projected-configmaps-70df108d-1a95-4933-ba9d-d074a19704be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005209594s Apr 29 22:01:51.137: INFO: Pod "pod-projected-configmaps-70df108d-1a95-4933-ba9d-d074a19704be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009375092s STEP: Saw pod success Apr 29 22:01:51.137: INFO: Pod "pod-projected-configmaps-70df108d-1a95-4933-ba9d-d074a19704be" satisfied condition "Succeeded or Failed" Apr 29 22:01:51.140: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-70df108d-1a95-4933-ba9d-d074a19704be container projected-configmap-volume-test: STEP: delete the pod Apr 29 22:01:51.154: INFO: Waiting for pod pod-projected-configmaps-70df108d-1a95-4933-ba9d-d074a19704be to disappear Apr 29 22:01:51.157: INFO: Pod pod-projected-configmaps-70df108d-1a95-4933-ba9d-d074a19704be no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:01:51.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1609" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":272,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:01:50.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-db3133a0-6274-46bf-98e0-43a52162c2b1 STEP: Creating a pod to test consume configMaps Apr 29 22:01:50.113: INFO: Waiting up to 5m0s for pod "pod-configmaps-f57f2317-500b-45cc-a244-05a9050398c4" in namespace "configmap-5498" to be "Succeeded or Failed" Apr 29 22:01:50.116: INFO: Pod "pod-configmaps-f57f2317-500b-45cc-a244-05a9050398c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.645378ms Apr 29 22:01:52.118: INFO: Pod "pod-configmaps-f57f2317-500b-45cc-a244-05a9050398c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005438711s Apr 29 22:01:54.123: INFO: Pod "pod-configmaps-f57f2317-500b-45cc-a244-05a9050398c4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01000464s STEP: Saw pod success Apr 29 22:01:54.123: INFO: Pod "pod-configmaps-f57f2317-500b-45cc-a244-05a9050398c4" satisfied condition "Succeeded or Failed" Apr 29 22:01:54.125: INFO: Trying to get logs from node node2 pod pod-configmaps-f57f2317-500b-45cc-a244-05a9050398c4 container agnhost-container: STEP: delete the pod Apr 29 22:01:54.139: INFO: Waiting for pod pod-configmaps-f57f2317-500b-45cc-a244-05a9050398c4 to disappear Apr 29 22:01:54.141: INFO: Pod pod-configmaps-f57f2317-500b-45cc-a244-05a9050398c4 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:01:54.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5498" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":431,"failed":0} [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:01:12.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Apr 29 22:01:16.376: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-3651 PodName:var-expansion-dc9ede44-9dee-4883-b41c-7e5d6654e9a9 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 22:01:16.376: INFO: >>> kubeConfig: /root/.kube/config STEP: test for file in mounted path Apr 29 22:01:16.761: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-3651 PodName:var-expansion-dc9ede44-9dee-4883-b41c-7e5d6654e9a9 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 22:01:16.761: INFO: >>> kubeConfig: /root/.kube/config STEP: updating the annotation value Apr 29 22:01:17.348: INFO: Successfully updated pod "var-expansion-dc9ede44-9dee-4883-b41c-7e5d6654e9a9" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Apr 29 22:01:17.351: INFO: Deleting pod "var-expansion-dc9ede44-9dee-4883-b41c-7e5d6654e9a9" in namespace "var-expansion-3651" Apr 29 22:01:17.355: INFO: Wait up to 5m0s for pod "var-expansion-dc9ede44-9dee-4883-b41c-7e5d6654e9a9" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:01:55.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3651" for this suite. 
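The Variable Expansion test writes a file through one mount of a volume and verifies it appears under a subPathExpr mount of the same volume. A trimmed client-go sketch of that pod shape; the conformance test drives the expansion from a pod annotation via the downward API, whereas this sketch uses metadata.name to stay short, and all names ("demo", "var-expansion-demo", busybox) are assumptions:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox",
				// Kubernetes expands $(POD_NAME) in command/args before exec,
				// so the shell sees the literal pod name, not a substitution.
				Command: []string{"sh", "-c",
					"touch /subpath_mount/test.log && test -f /volume_mount/mypath/$(POD_NAME)/test.log"},
				Env: []corev1.EnvVar{{
					Name: "POD_NAME",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{
					// The whole volume at one path...
					{Name: "workdir", MountPath: "/volume_mount"},
					// ...and an env-expanded subdirectory of the same volume at another.
					{Name: "workdir", MountPath: "/subpath_mount", SubPathExpr: "mypath/$(POD_NAME)"},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("demo").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}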
• [SLOW TEST:43.033 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should succeed in writing subpaths in container [Slow] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:01:49.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-upd-ed044f69-8299-4933-abe7-b4bfc3d49bc5
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:01:55.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7732" for this suite.
• [SLOW TEST:6.074 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
binary data should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":464,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:01:51.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should serve multiport endpoints from pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service multi-endpoint-test in namespace services-7415
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7415 to expose endpoints map[]
Apr 29 22:01:51.312: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found
Apr 29 22:01:52.319: INFO: successfully validated that service multi-endpoint-test in namespace services-7415 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-7415
Apr 29 22:01:52.332: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 22:01:54.336: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 22:01:56.336: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7415 to expose endpoints map[pod1:[100]]
Apr 29 22:01:56.348: INFO: successfully validated that service multi-endpoint-test in namespace services-7415 exposes endpoints map[pod1:[100]]
STEP: Creating pod pod2 in namespace services-7415
Apr 29 22:01:56.362: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 22:01:58.366: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 22:02:00.368: INFO: The status of Pod pod2 is Running (Ready = true)
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7415 to expose endpoints map[pod1:[100] pod2:[101]]
Apr 29 22:02:00.383: INFO: successfully validated that service multi-endpoint-test in namespace services-7415 exposes endpoints map[pod1:[100] pod2:[101]]
STEP: Deleting pod pod1 in namespace services-7415
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7415 to expose endpoints map[pod2:[101]]
Apr 29 22:02:00.398: INFO: successfully validated that service multi-endpoint-test in namespace services-7415 exposes endpoints map[pod2:[101]]
STEP: Deleting pod pod2 in namespace services-7415
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7415 to expose endpoints map[]
Apr 29 22:02:00.409: INFO: successfully validated that service multi-endpoint-test in namespace services-7415 exposes endpoints map[]
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:02:00.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7415" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• [SLOW TEST:9.152 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should serve multiport endpoints from pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":23,"skipped":334,"failed":0}
SSSSS
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":22,"skipped":431,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:01:55.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 22:01:55.712: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 22:01:57.723: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866515, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866515, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866515, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866515, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 22:02:00.732: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:02:01.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8946" for this suite.
STEP: Destroying namespace "webhook-8946-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.454 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should unconditionally reject operations on fail closed webhook [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":23,"skipped":431,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:02:00.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on node default medium
Apr 29 22:02:00.470: INFO: Waiting up to 5m0s for pod "pod-16b54305-376c-4b04-83d0-5e0cb37afa87" in namespace "emptydir-6221" to be "Succeeded or Failed"
Apr 29 22:02:00.477: INFO: Pod "pod-16b54305-376c-4b04-83d0-5e0cb37afa87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.622951ms
Apr 29 22:02:02.480: INFO: Pod "pod-16b54305-376c-4b04-83d0-5e0cb37afa87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009869998s
Apr 29 22:02:04.484: INFO: Pod "pod-16b54305-376c-4b04-83d0-5e0cb37afa87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013948861s
STEP: Saw pod success
Apr 29 22:02:04.484: INFO: Pod "pod-16b54305-376c-4b04-83d0-5e0cb37afa87" satisfied condition "Succeeded or Failed"
Apr 29 22:02:04.487: INFO: Trying to get logs from node node1 pod pod-16b54305-376c-4b04-83d0-5e0cb37afa87 container test-container:
STEP: delete the pod
Apr 29 22:02:04.499: INFO: Waiting for pod pod-16b54305-376c-4b04-83d0-5e0cb37afa87 to disappear
Apr 29 22:02:04.501: INFO: Pod pod-16b54305-376c-4b04-83d0-5e0cb37afa87 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:02:04.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6221" for this suite.
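The EmptyDir test above creates a pod whose volume uses the node's default storage medium and asserts the mount carries the expected mode bits. A small sketch of the same shape, with busybox standing in for the e2e mounttest image and hypothetical names throughout:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// StorageMediumDefault ("") selects the node's default medium,
				// as opposed to StorageMediumMemory (tmpfs).
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Prints something like "drwxrwxrwx ... /test-volume"; the
				// conformance test asserts the 0777 mode on the mount point.
				Command:      []string{"sh", "-c", "ls -ld /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("demo").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}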
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":339,"failed":0}
SS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:02:04.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:02:04.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-8982" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":25,"skipped":341,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:02:01.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 22:02:02.186: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 22:02:04.196: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866522, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866522, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866522, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866522, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 22:02:07.207: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:02:07.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6302" for this suite.
STEP: Destroying namespace "webhook-6302-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.412 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":24,"skipped":455,"failed":0}
SS
------------------------------
[BeforeEach] [sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:02:07.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:02:11.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4473" for this suite.
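The Kubelet test just above schedules a busybox command that always fails and asserts the container status reports a terminated reason. A sketch of creating such a pod and reading that status back with client-go; pod and namespace names are hypothetical:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// A container that always fails, like the e2e fixture.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("demo").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Poll until the kubelet reports a terminated state; the test expects
	// a non-empty Reason (typically "Error") and a non-zero exit code.
	for {
		p, err := cs.CoreV1().Pods("demo").Get(ctx, "bin-false-demo", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, st := range p.Status.ContainerStatuses {
			if t := st.State.Terminated; t != nil {
				fmt.Printf("reason=%s exitCode=%d\n", t.Reason, t.ExitCode)
				return
			}
		}
		time.Sleep(time.Second)
	}
}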
•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":457,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:02:11.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should support proxy with --port 0 [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: starting the proxy server
Apr 29 22:02:11.372: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1674 proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:02:11.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1674" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":26,"skipped":464,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:01:55.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[BeforeEach] Kubectl logs
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1386
STEP: creating an pod
Apr 29 22:01:55.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2933 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s'
Apr 29 22:01:55.630: INFO: stderr: ""
Apr 29 22:01:55.630: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Waiting for log generator to start.
Apr 29 22:01:55.630: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Apr 29 22:01:55.630: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2933" to be "running and ready, or succeeded"
Apr 29 22:01:55.633: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.30767ms
Apr 29 22:01:57.636: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005891293s
Apr 29 22:01:59.640: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.009943436s
Apr 29 22:01:59.640: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Apr 29 22:01:59.640: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for a matching strings
Apr 29 22:01:59.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2933 logs logs-generator logs-generator'
Apr 29 22:01:59.794: INFO: stderr: ""
Apr 29 22:01:59.794: INFO: stdout: "I0429 22:01:57.688693 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/b8c 410\nI0429 22:01:57.889798 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/stnd 490\nI0429 22:01:58.089244 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/8hvb 452\nI0429 22:01:58.289659 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/lkh 503\nI0429 22:01:58.489051 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/jcd 344\nI0429 22:01:58.689180 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/spzn 439\nI0429 22:01:58.888746 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/k6p 572\nI0429 22:01:59.089494 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/562 288\nI0429 22:01:59.289287 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/266 535\nI0429 22:01:59.489765 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/4lkm 548\nI0429 22:01:59.689375 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/2db 502\n"
STEP: limiting log lines
Apr 29 22:01:59.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2933 logs logs-generator logs-generator --tail=1'
Apr 29 22:01:59.957: INFO: stderr: ""
Apr 29 22:01:59.957: INFO: stdout: "I0429 22:01:59.888930 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/ddh 445\n"
Apr 29 22:01:59.957: INFO: got output "I0429 22:01:59.888930 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/ddh 445\n"
STEP: limiting log bytes
Apr 29 22:01:59.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2933 logs logs-generator logs-generator --limit-bytes=1'
Apr 29 22:02:00.107: INFO: stderr: ""
Apr 29 22:02:00.107: INFO: stdout: "I"
Apr 29 22:02:00.107: INFO: got output "I"
STEP: exposing timestamps
Apr 29 22:02:00.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2933 logs logs-generator logs-generator --tail=1 --timestamps'
Apr 29 22:02:00.267: INFO: stderr: ""
Apr 29 22:02:00.267: INFO: stdout: "2022-04-29T22:02:00.089554321Z I0429 22:02:00.089385 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/74t 364\n"
Apr 29 22:02:00.267: INFO: got output "2022-04-29T22:02:00.089554321Z I0429 22:02:00.089385 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/74t 364\n"
STEP: restricting to a time range
Apr 29 22:02:02.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2933 logs logs-generator logs-generator --since=1s'
Apr 29 22:02:02.938: INFO: stderr: ""
Apr 29 22:02:02.938: INFO: stdout: "I0429 22:02:02.113587 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/zgsw 265\nI0429 22:02:02.288825 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/5w7 441\nI0429 22:02:02.489021 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/nhc 249\nI0429 22:02:02.689275 1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/xktv 456\nI0429 22:02:02.889578 1 logs_generator.go:76] 26 POST /api/v1/namespaces/default/pods/8nr 275\n"
Apr 29 22:02:02.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2933 logs logs-generator logs-generator --since=24h'
Apr 29 22:02:03.094: INFO: stderr: ""
Apr 29 22:02:03.094: INFO: stdout: "I0429 22:01:57.688693 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/b8c 410\nI0429 22:01:57.889798 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/stnd 490\nI0429 22:01:58.089244 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/8hvb 452\nI0429 22:01:58.289659 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/lkh 503\nI0429 22:01:58.489051 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/jcd 344\nI0429 22:01:58.689180 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/spzn 439\nI0429 22:01:58.888746 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/k6p 572\nI0429 22:01:59.089494 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/562 288\nI0429 22:01:59.289287 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/266 535\nI0429 22:01:59.489765 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/4lkm 548\nI0429 22:01:59.689375 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/2db 502\nI0429 22:01:59.888930 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/ddh 445\nI0429 22:02:00.089385 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/74t 364\nI0429 22:02:00.288720 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/tbv 366\nI0429 22:02:00.489160 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/zm6n 448\nI0429 22:02:00.689550 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/7m2q 576\nI0429 22:02:00.889113 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/gww4 580\nI0429 22:02:01.089551 1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/5qw 475\nI0429 22:02:01.288845 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/kxbs 351\nI0429 22:02:01.489177 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/hdzx 218\nI0429 22:02:01.689476 1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/5t2r 453\nI0429 22:02:01.888706 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/vhn 444\nI0429 22:02:02.113587 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/zgsw 265\nI0429 22:02:02.288825 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/5w7 441\nI0429 22:02:02.489021 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/nhc 249\nI0429 22:02:02.689275 1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/xktv 456\nI0429 22:02:02.889578 1 logs_generator.go:76] 26 POST /api/v1/namespaces/default/pods/8nr 275\n"
[AfterEach] Kubectl logs
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391
Apr 29 22:02:03.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2933 delete pod logs-generator'
Apr 29 22:02:15.210: INFO: stderr: ""
Apr 29 22:02:15.210: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:02:15.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2933" for this suite.
• [SLOW TEST:19.757 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl logs
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383
should be able to retrieve and filter logs [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":24,"skipped":521,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:02:15.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86
[It] Deployment should have a working scale subresource [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 29 22:02:15.333: INFO: Creating simple deployment test-new-deployment
Apr 29 22:02:15.341: INFO: deployment "test-new-deployment" doesn't have the required revision set
Apr 29 22:02:17.349: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866535, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866535, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866535, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866535, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the deployment Spec.Replicas was modified
STEP: Patch a scale subresource
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80
Apr 29 22:02:19.374: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-3899 8d94abc9-03b1-45f2-ab45-bb047e51ba45 40243 3 2022-04-29 22:02:15 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2022-04-29 22:02:15 +0000 UTC FieldsV1
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-04-29 22:02:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0033e0cb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-04-29 22:02:17 +0000 UTC,LastTransitionTime:2022-04-29 22:02:17 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2022-04-29 22:02:17 +0000 UTC,LastTransitionTime:2022-04-29 22:02:15 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 29 22:02:19.377: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-3899 f8de94b5-8091-4f74-ad5b-73be516e6eda 40245 3 2022-04-29 22:02:15 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 
8d94abc9-03b1-45f2-ab45-bb047e51ba45 0xc0033e10a7 0xc0033e10a8}] [] [{kube-controller-manager Update apps/v1 2022-04-29 22:02:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8d94abc9-03b1-45f2-ab45-bb047e51ba45\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0033e1118 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 29 22:02:19.381: INFO: Pod "test-new-deployment-847dcfb7fb-qwdhq" is available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-qwdhq test-new-deployment-847dcfb7fb- deployment-3899 696a2622-b66e-43e7-bc53-71da8477d792 40229 0 2022-04-29 22:02:15 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.101" ], "mac": "3a:53:13:54:5f:96", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.101" ], "mac": "3a:53:13:54:5f:96", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb f8de94b5-8091-4f74-ad5b-73be516e6eda 0xc003345c8f 0xc003345ca0}] [] [{kube-controller-manager Update v1 2022-04-29 22:02:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8de94b5-8091-4f74-ad5b-73be516e6eda\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-29 22:02:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-29 22:02:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.101\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-t22dc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t22dc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volu
meDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:02:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:02:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:02:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:02:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.101,StartTime:2022-04-29 22:02:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 22:02:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://46bfbe1b4343aae302c129eac51cbed24165c219218af1eb33d57e4d32ef5324,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.101,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 29 22:02:19.382: INFO: Pod "test-new-deployment-847dcfb7fb-tvwjp" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-tvwjp test-new-deployment-847dcfb7fb- deployment-3899 36b187fe-37b1-4318-b736-29e1dbbddd10 40250 0 2022-04-29 22:02:19 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb f8de94b5-8091-4f74-ad5b-73be516e6eda 0xc003345e8f 0xc003345ea0}] [] [{kube-controller-manager Update v1 2022-04-29 22:02:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8de94b5-8091-4f74-ad5b-73be516e6eda\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7tn9k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7tn9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:02:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:02:19.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3899" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":25,"skipped":573,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:02:04.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:02:20.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8497" for this suite.
• [SLOW TEST:16.110 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with best effort scope. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":26,"skipped":348,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:02:20.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name secret-emptykey-test-788ce0ab-2a12-49b6-ad15-17f4671e4fc6
[AfterEach] [sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:02:20.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-441" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":27,"skipped":361,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 21:58:19.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod liveness-a27079a1-efa7-48d1-bda3-f2ce509640bb in namespace container-probe-5851
Apr 29 21:58:25.446: INFO: Started pod liveness-a27079a1-efa7-48d1-bda3-f2ce509640bb in namespace container-probe-5851
STEP: checking the pod's current state and verifying that restartCount is present
Apr 29 21:58:25.449: INFO: Initial restart count of pod liveness-a27079a1-efa7-48d1-bda3-f2ce509640bb is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:02:25.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5851" for this suite.
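The pod this container-probe test runs for four minutes can be approximated outside the suite. A minimal client-go sketch follows; the image and args are assumptions (agnhost netexec serves HTTP on 8080 by default), and it assumes the v1.21-era k8s.io/api, where Probe's handler field is named Handler (renamed ProbeHandler in later releases):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "sigs.k8s.io/yaml"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-tcp"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "agnhost",
                    Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
                    Args:  []string{"netexec", "--http-port=8080"}, // listens on :8080
                    LivenessProbe: &corev1.Probe{
                        Handler: corev1.Handler{ // ProbeHandler in client-go >= 0.23
                            TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
                        },
                        InitialDelaySeconds: 15,
                        FailureThreshold:    3,
                    },
                }},
            },
        }
        out, _ := yaml.Marshal(pod)
        fmt.Print(string(out)) // emits an equivalent YAML manifest
    }

As long as the container keeps accepting TCP connections on 8080, the kubelet never restarts it, which is what the restartCount check above asserts.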
• [SLOW TEST:246.527 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":7,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:02:20.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 29 22:02:20.772: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-45fa4fd2-7fe3-496f-8012-5c228d55ba43" in namespace "security-context-test-9959" to be "Succeeded or Failed"
Apr 29 22:02:20.774: INFO: Pod "alpine-nnp-false-45fa4fd2-7fe3-496f-8012-5c228d55ba43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.259228ms
Apr 29 22:02:22.779: INFO: Pod "alpine-nnp-false-45fa4fd2-7fe3-496f-8012-5c228d55ba43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006621521s
Apr 29 22:02:24.783: INFO: Pod "alpine-nnp-false-45fa4fd2-7fe3-496f-8012-5c228d55ba43": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010577565s
Apr 29 22:02:26.787: INFO: Pod "alpine-nnp-false-45fa4fd2-7fe3-496f-8012-5c228d55ba43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014594514s
Apr 29 22:02:26.787: INFO: Pod "alpine-nnp-false-45fa4fd2-7fe3-496f-8012-5c228d55ba43" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:02:26.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9959" for this suite.
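The property under test is the AllowPrivilegeEscalation field of the container's SecurityContext. A minimal sketch of such a pod (image and command are illustrative stand-ins, not the suite's own nonewprivs test binary):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        noEscalation := false
        runAsUser := int64(1000)
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-false"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "alpine-nnp-false",
                    Image:   "alpine:3.15", // illustrative image
                    Command: []string{"id", "-u"},
                    SecurityContext: &corev1.SecurityContext{
                        RunAsUser:                &runAsUser,
                        AllowPrivilegeEscalation: &noEscalation, // the field this test pins to false
                    },
                }},
            },
        }
        out, _ := yaml.Marshal(pod)
        fmt.Print(string(out))
    }

With the flag false, the kernel's no_new_privs bit is set for the container process, so setuid binaries cannot raise the effective UID; the test expects the pod to run to completion ("Succeeded or Failed" with Succeeded).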
• [SLOW TEST:6.061 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
when creating containers with AllowPrivilegeEscalation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":364,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:02:26.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Apr 29 22:02:30.861: INFO: &Pod{ObjectMeta:{send-events-7a1f050b-4211-4763-b5bf-d74e67e7ecae events-1122 cf6b5ea2-f35f-405a-9bcd-4714714bc210 40482 0 2022-04-29 22:02:26 +0000 UTC map[name:foo time:832794602] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.174" ], "mac": "7e:ad:81:cf:55:f7", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.174" ], "mac": "7e:ad:81:cf:55:f7", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2022-04-29 22:02:26 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-29 22:02:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-29 22:02:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.174\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gzzs4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gzzs4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace
:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:02:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:02:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:02:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:02:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.174,StartTime:2022-04-29 22:02:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 22:02:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://53a0ffc1bb6c96bcd6e244ec87bcb309f48c7a03af25009232b439050b5098d7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.174,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: checking for scheduler event about the pod
Apr 29 22:02:32.866: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Apr 29 22:02:34.871: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:02:34.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1122" for this suite.
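The scheduler and kubelet events the test looks for can be fetched the same way with a field selector on the event's involvedObject. A sketch, assuming a reachable kubeconfig; the namespace and pod name below are this run's ephemeral values and will differ elsewhere:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Select events whose involvedObject is the pod; the test then checks
        // for one event from source=default-scheduler and one from source=kubelet.
        events, err := client.CoreV1().Events("events-1122").List(context.TODO(), metav1.ListOptions{
            FieldSelector: "involvedObject.name=send-events-7a1f050b-4211-4763-b5bf-d74e67e7ecae",
        })
        if err != nil {
            panic(err)
        }
        for _, e := range events.Items {
            fmt.Printf("%s\t%s\t%s\n", e.Source.Component, e.Reason, e.Message)
        }
    }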
• [SLOW TEST:8.070 seconds]
[sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":29,"skipped":369,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:02:34.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be immutable if `immutable` field is set [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:02:34.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5027" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":30,"skipped":380,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:02:25.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8990.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8990.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8990.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8990.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8990.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8990.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8990.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8990.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8990.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8990.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8990.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8990.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8990.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 98.8.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.8.98_udp@PTR;check="$$(dig +tcp +noall +answer +search 98.8.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.8.98_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8990.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8990.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8990.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8990.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8990.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8990.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8990.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8990.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8990.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8990.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8990.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8990.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8990.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 98.8.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.8.98_udp@PTR;check="$$(dig +tcp +noall +answer +search 98.8.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.8.98_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 29 22:02:32.008: INFO: Unable to read wheezy_udp@dns-test-service.dns-8990.svc.cluster.local from pod dns-8990/dns-test-2dd0bb97-da2c-4c66-89d6-a589de32b894: the server could not find the requested resource (get pods dns-test-2dd0bb97-da2c-4c66-89d6-a589de32b894)
Apr 29 22:02:32.010: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8990.svc.cluster.local from pod dns-8990/dns-test-2dd0bb97-da2c-4c66-89d6-a589de32b894: the server could not find the requested resource (get pods dns-test-2dd0bb97-da2c-4c66-89d6-a589de32b894)
Apr 29 22:02:32.013: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8990.svc.cluster.local from pod dns-8990/dns-test-2dd0bb97-da2c-4c66-89d6-a589de32b894: the server could not find the requested resource (get pods dns-test-2dd0bb97-da2c-4c66-89d6-a589de32b894)
Apr 29 22:02:32.017: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8990.svc.cluster.local from pod dns-8990/dns-test-2dd0bb97-da2c-4c66-89d6-a589de32b894: the server could not find the requested resource (get pods dns-test-2dd0bb97-da2c-4c66-89d6-a589de32b894)
Apr 29 22:02:32.035: INFO: Unable to read jessie_udp@dns-test-service.dns-8990.svc.cluster.local from pod dns-8990/dns-test-2dd0bb97-da2c-4c66-89d6-a589de32b894: the server could not find the requested resource (get pods dns-test-2dd0bb97-da2c-4c66-89d6-a589de32b894)
Apr 29 22:02:32.038: INFO: Unable to read jessie_tcp@dns-test-service.dns-8990.svc.cluster.local from pod dns-8990/dns-test-2dd0bb97-da2c-4c66-89d6-a589de32b894: the server could not find the requested resource (get pods dns-test-2dd0bb97-da2c-4c66-89d6-a589de32b894)
Apr 29 22:02:32.040: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8990.svc.cluster.local from pod dns-8990/dns-test-2dd0bb97-da2c-4c66-89d6-a589de32b894: the server could not find the requested resource (get pods dns-test-2dd0bb97-da2c-4c66-89d6-a589de32b894)
Apr 29 22:02:32.044: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8990.svc.cluster.local from pod dns-8990/dns-test-2dd0bb97-da2c-4c66-89d6-a589de32b894: the server could not find the requested resource (get pods dns-test-2dd0bb97-da2c-4c66-89d6-a589de32b894)
Apr 29 22:02:32.060: INFO: Lookups using dns-8990/dns-test-2dd0bb97-da2c-4c66-89d6-a589de32b894 failed for: [wheezy_udp@dns-test-service.dns-8990.svc.cluster.local wheezy_tcp@dns-test-service.dns-8990.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8990.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8990.svc.cluster.local jessie_udp@dns-test-service.dns-8990.svc.cluster.local jessie_tcp@dns-test-service.dns-8990.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8990.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8990.svc.cluster.local]
Apr 29 22:02:37.115: INFO: DNS probes using dns-8990/dns-test-2dd0bb97-da2c-4c66-89d6-a589de32b894 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:02:37.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8990" for this suite.
• [SLOW TEST:11.199 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should provide DNS for services [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:02:35.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] Replace and Patch tests [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 29 22:02:35.070: INFO: Pod name sample-pod: Found 0 pods out of 1
Apr 29 22:02:40.075: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: Scaling up "test-rs" replicaset
Apr 29 22:02:40.082: INFO: Updating replica set "test-rs"
STEP: patching the ReplicaSet
Apr 29 22:02:40.088: INFO: observed ReplicaSet test-rs in namespace replicaset-8238 with ReadyReplicas 1, AvailableReplicas 1
Apr 29 22:02:40.100: INFO: observed ReplicaSet test-rs in namespace replicaset-8238 with ReadyReplicas 1, AvailableReplicas 1
Apr 29 22:02:40.114: INFO: observed ReplicaSet test-rs in namespace replicaset-8238 with ReadyReplicas 1, AvailableReplicas 1
Apr 29 22:02:40.117: INFO: observed ReplicaSet test-rs in namespace replicaset-8238 with ReadyReplicas 1, AvailableReplicas 1
Apr 29 22:02:42.676: INFO: observed ReplicaSet test-rs in namespace replicaset-8238 with ReadyReplicas 2, AvailableReplicas 2
Apr 29 22:02:43.816: INFO: observed Replicaset test-rs in namespace replicaset-8238 with ReadyReplicas 3 found true
[AfterEach] [sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:02:43.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8238" for this suite.
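The "Scaling up" and "patching" steps correspond to a patch against the ReplicaSet object; watching status.readyReplicas climb from 1 to 3 is what the observed lines above record. A client-go sketch of the patch half, with this run's namespace and name hardcoded for illustration:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Strategic-merge patch that bumps spec.replicas to 3.
        patch := []byte(`{"spec":{"replicas":3}}`)
        rs, err := client.AppsV1().ReplicaSets("replicaset-8238").Patch(
            context.TODO(), "test-rs", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("ReadyReplicas=%d AvailableReplicas=%d\n",
            rs.Status.ReadyReplicas, rs.Status.AvailableReplicas)
    }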
• [SLOW TEST:8.783 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
Replace and Patch tests [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":31,"skipped":436,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:02:19.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: referencing a single matching pod
STEP: referencing matching pods with named port
STEP: creating empty Endpoints and EndpointSlices for no matching Pods
STEP: recreating EndpointSlices after they've been deleted
Apr 29 22:02:39.554: INFO: EndpointSlice for Service endpointslice-9133/example-named-port not found
[AfterEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:02:49.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-9133" for this suite.
• [SLOW TEST:30.126 seconds]
[sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":26,"skipped":607,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:02:49.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's args
Apr 29 22:02:49.611: INFO: Waiting up to 5m0s for pod "var-expansion-0fc9fbc0-bf19-4176-97c0-d5ae78511896" in namespace "var-expansion-6222" to be "Succeeded or Failed"
Apr 29 22:02:49.614: INFO: Pod "var-expansion-0fc9fbc0-bf19-4176-97c0-d5ae78511896": Phase="Pending", Reason="", readiness=false. Elapsed: 3.435223ms
Apr 29 22:02:51.617: INFO: Pod "var-expansion-0fc9fbc0-bf19-4176-97c0-d5ae78511896": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006532991s
Apr 29 22:02:53.622: INFO: Pod "var-expansion-0fc9fbc0-bf19-4176-97c0-d5ae78511896": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010728383s
STEP: Saw pod success
Apr 29 22:02:53.622: INFO: Pod "var-expansion-0fc9fbc0-bf19-4176-97c0-d5ae78511896" satisfied condition "Succeeded or Failed"
Apr 29 22:02:53.625: INFO: Trying to get logs from node node2 pod var-expansion-0fc9fbc0-bf19-4176-97c0-d5ae78511896 container dapi-container:
STEP: delete the pod
Apr 29 22:02:53.642: INFO: Waiting for pod var-expansion-0fc9fbc0-bf19-4176-97c0-d5ae78511896 to disappear
Apr 29 22:02:53.644: INFO: Pod var-expansion-0fc9fbc0-bf19-4176-97c0-d5ae78511896 no longer exists
[AfterEach] [sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:02:53.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6222" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":610,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":4,"skipped":12,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:02:37.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: set up a multi version CRD
Apr 29 22:02:37.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:03:02.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5019" for this suite.
• [SLOW TEST:25.554 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
updates the published spec when one version gets renamed [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":5,"skipped":12,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:02:11.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod busybox-1ab0d3e3-e83b-430b-aa08-95b2afdceca9 in namespace container-probe-7293
Apr 29 22:02:15.571: INFO: Started pod busybox-1ab0d3e3-e83b-430b-aa08-95b2afdceca9 in namespace container-probe-7293
STEP: checking the pod's current state and verifying that restartCount is present
Apr 29 22:02:15.574: INFO: Initial restart count of pod busybox-1ab0d3e3-e83b-430b-aa08-95b2afdceca9 is 0
Apr 29 22:03:03.672: INFO: Restart count of pod container-probe-7293/busybox-1ab0d3e3-e83b-430b-aa08-95b2afdceca9 is now 1 (48.097870187s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:03:03.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7293" for this suite.
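The restart recorded above comes from the probe design: the container creates /tmp/health, removes it after ten seconds, and from then on "cat /tmp/health" fails until the kubelet restarts the container. A sketch of an equivalent pod (this mirrors the well-known upstream pattern; the exact image tag and thresholds here are illustrative, and the Handler field is the v1.21-era name):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "busybox",
                    Image: "busybox:1.29",
                    // healthy for ~10s, then the probe target disappears
                    Command: []string{"/bin/sh", "-c",
                        "touch /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600"},
                    LivenessProbe: &corev1.Probe{
                        Handler: corev1.Handler{ // ProbeHandler in client-go >= 0.23
                            Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                        },
                        InitialDelaySeconds: 15,
                        FailureThreshold:    1,
                    },
                }},
            },
        }
        out, _ := yaml.Marshal(pod)
        fmt.Print(string(out))
    }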
• [SLOW TEST:52.159 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":488,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:02:53.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[BeforeEach] Update Demo
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293
[It] should create and stop a replication controller [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a replication controller
Apr 29 22:02:53.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8914 create -f -'
Apr 29 22:02:54.080: INFO: stderr: ""
Apr 29 22:02:54.080: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 29 22:02:54.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8914 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Apr 29 22:02:54.251: INFO: stderr: ""
Apr 29 22:02:54.251: INFO: stdout: "update-demo-nautilus-2p7qs update-demo-nautilus-cnfdn "
Apr 29 22:02:54.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8914 get pods update-demo-nautilus-2p7qs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Apr 29 22:02:54.429: INFO: stderr: ""
Apr 29 22:02:54.429: INFO: stdout: ""
Apr 29 22:02:54.429: INFO: update-demo-nautilus-2p7qs is created but not running
Apr 29 22:02:59.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8914 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Apr 29 22:02:59.598: INFO: stderr: ""
Apr 29 22:02:59.598: INFO: stdout: "update-demo-nautilus-2p7qs update-demo-nautilus-cnfdn "
Apr 29 22:02:59.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8914 get pods update-demo-nautilus-2p7qs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Apr 29 22:02:59.764: INFO: stderr: ""
Apr 29 22:02:59.764: INFO: stdout: ""
Apr 29 22:02:59.764: INFO: update-demo-nautilus-2p7qs is created but not running
Apr 29 22:03:04.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8914 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Apr 29 22:03:04.942: INFO: stderr: ""
Apr 29 22:03:04.942: INFO: stdout: "update-demo-nautilus-2p7qs update-demo-nautilus-cnfdn "
Apr 29 22:03:04.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8914 get pods update-demo-nautilus-2p7qs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Apr 29 22:03:05.114: INFO: stderr: ""
Apr 29 22:03:05.114: INFO: stdout: "true"
Apr 29 22:03:05.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8914 get pods update-demo-nautilus-2p7qs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Apr 29 22:03:05.275: INFO: stderr: ""
Apr 29 22:03:05.275: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Apr 29 22:03:05.275: INFO: validating pod update-demo-nautilus-2p7qs
Apr 29 22:03:05.278: INFO: got data: { "image": "nautilus.jpg" }
Apr 29 22:03:05.278: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 29 22:03:05.278: INFO: update-demo-nautilus-2p7qs is verified up and running
Apr 29 22:03:05.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8914 get pods update-demo-nautilus-cnfdn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Apr 29 22:03:05.427: INFO: stderr: ""
Apr 29 22:03:05.427: INFO: stdout: "true"
Apr 29 22:03:05.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8914 get pods update-demo-nautilus-cnfdn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Apr 29 22:03:05.584: INFO: stderr: ""
Apr 29 22:03:05.584: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Apr 29 22:03:05.584: INFO: validating pod update-demo-nautilus-cnfdn
Apr 29 22:03:05.595: INFO: got data: { "image": "nautilus.jpg" }
Apr 29 22:03:05.595: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 29 22:03:05.595: INFO: update-demo-nautilus-cnfdn is verified up and running
STEP: using delete to clean up resources
Apr 29 22:03:05.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8914 delete --grace-period=0 --force -f -'
Apr 29 22:03:05.721: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 29 22:03:05.721: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 29 22:03:05.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8914 get rc,svc -l name=update-demo --no-headers'
Apr 29 22:03:05.905: INFO: stderr: "No resources found in kubectl-8914 namespace.\n"
Apr 29 22:03:05.905: INFO: stdout: ""
Apr 29 22:03:05.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8914 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 29 22:03:06.066: INFO: stderr: ""
Apr 29 22:03:06.066: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:03:06.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8914" for this suite.
• [SLOW TEST:12.380 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291
should create and stop a replication controller [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:03:03.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Apr 29 22:03:03.746: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b36affe8-03d9-4532-bb98-c94e583d40b4" in namespace "downward-api-9776" to be "Succeeded or Failed"
Apr 29 22:03:03.750: INFO: Pod "downwardapi-volume-b36affe8-03d9-4532-bb98-c94e583d40b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053366ms
Apr 29 22:03:05.753: INFO: Pod "downwardapi-volume-b36affe8-03d9-4532-bb98-c94e583d40b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006650837s
Apr 29 22:03:07.756: INFO: Pod "downwardapi-volume-b36affe8-03d9-4532-bb98-c94e583d40b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009808795s
STEP: Saw pod success
Apr 29 22:03:07.756: INFO: Pod "downwardapi-volume-b36affe8-03d9-4532-bb98-c94e583d40b4" satisfied condition "Succeeded or Failed"
Apr 29 22:03:07.758: INFO: Trying to get logs from node node1 pod downwardapi-volume-b36affe8-03d9-4532-bb98-c94e583d40b4 container client-container:
STEP: delete the pod
Apr 29 22:03:07.772: INFO: Waiting for pod downwardapi-volume-b36affe8-03d9-4532-bb98-c94e583d40b4 to disappear
Apr 29 22:03:07.774: INFO: Pod downwardapi-volume-b36affe8-03d9-4532-bb98-c94e583d40b4 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:03:07.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9776" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":500,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:03:02.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 29 22:03:02.799: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:03:08.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-55" for this suite.
• [SLOW TEST:5.562 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Simple CustomResourceDefinition
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
getting/updating/patching custom resource definition status sub-resource works [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":6,"skipped":46,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:03:07.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-e932a1e9-b8ce-4f28-b0ce-1ce051d1ff8c
STEP: Creating a pod to test consume secrets
Apr 29 22:03:07.820: INFO: Waiting up to 5m0s for pod "pod-secrets-ec536434-8a27-40df-80e9-4483a974c48c" in namespace "secrets-3705" to be "Succeeded or Failed"
Apr 29 22:03:07.822: INFO: Pod "pod-secrets-ec536434-8a27-40df-80e9-4483a974c48c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.763563ms
Apr 29 22:03:09.826: INFO: Pod "pod-secrets-ec536434-8a27-40df-80e9-4483a974c48c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005715761s
Apr 29 22:03:11.830: INFO: Pod "pod-secrets-ec536434-8a27-40df-80e9-4483a974c48c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009699986s
STEP: Saw pod success
Apr 29 22:03:11.830: INFO: Pod "pod-secrets-ec536434-8a27-40df-80e9-4483a974c48c" satisfied condition "Succeeded or Failed"
Apr 29 22:03:11.832: INFO: Trying to get logs from node node1 pod pod-secrets-ec536434-8a27-40df-80e9-4483a974c48c container secret-env-test:
STEP: delete the pod
Apr 29 22:03:11.846: INFO: Waiting for pod pod-secrets-ec536434-8a27-40df-80e9-4483a974c48c to disappear
Apr 29 22:03:11.848: INFO: Pod pod-secrets-ec536434-8a27-40df-80e9-4483a974c48c no longer exists
[AfterEach] [sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:03:11.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3705" for this suite.
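The consuming pod wires the secret in through env[].valueFrom.secretKeyRef. A minimal sketch (secret name shortened and key name assumed; the suite uses a generated name like the one logged above):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "secret-env-test",
                    Image:   "busybox:1.29",
                    Command: []string{"sh", "-c", "env | grep SECRET_DATA"},
                    Env: []corev1.EnvVar{{
                        Name: "SECRET_DATA",
                        ValueFrom: &corev1.EnvVarSource{
                            SecretKeyRef: &corev1.SecretKeySelector{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
                                Key:                  "data-1", // assumed key
                            },
                        },
                    }},
                }},
            },
        }
        out, _ := yaml.Marshal(pod)
        fmt.Print(string(out))
    }

The test then reads the container's logs and checks that the env var holds the secret's decoded value, which is why it fetches logs from the secret-env-test container above.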
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":501,"failed":0}
SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":28,"skipped":638,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:03:06.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Apr 29 22:03:06.114: INFO: Waiting up to 5m0s for pod "pod-70bb3fd0-854f-4fee-9161-1fe0eaa974ac" in namespace "emptydir-977" to be "Succeeded or Failed"
Apr 29 22:03:06.122: INFO: Pod "pod-70bb3fd0-854f-4fee-9161-1fe0eaa974ac": Phase="Pending", Reason="", readiness=false. Elapsed: 7.198438ms
Apr 29 22:03:08.126: INFO: Pod "pod-70bb3fd0-854f-4fee-9161-1fe0eaa974ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011098652s
Apr 29 22:03:10.131: INFO: Pod "pod-70bb3fd0-854f-4fee-9161-1fe0eaa974ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016333407s
Apr 29 22:03:12.135: INFO: Pod "pod-70bb3fd0-854f-4fee-9161-1fe0eaa974ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020664834s
STEP: Saw pod success
Apr 29 22:03:12.135: INFO: Pod "pod-70bb3fd0-854f-4fee-9161-1fe0eaa974ac" satisfied condition "Succeeded or Failed"
Apr 29 22:03:12.138: INFO: Trying to get logs from node node2 pod pod-70bb3fd0-854f-4fee-9161-1fe0eaa974ac container test-container:
STEP: delete the pod
Apr 29 22:03:12.149: INFO: Waiting for pod pod-70bb3fd0-854f-4fee-9161-1fe0eaa974ac to disappear
Apr 29 22:03:12.151: INFO: Pod pod-70bb3fd0-854f-4fee-9161-1fe0eaa974ac no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:03:12.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-977" for this suite.
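The tmpfs backing this test checks comes from the emptyDir medium: setting Medium: Memory makes the kubelet mount a tmpfs at the volume path. A sketch (names and the verification command are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "test-container",
                    Image:   "busybox:1.29",
                    Command: []string{"sh", "-c", "ls -ld /test-volume && mount | grep /test-volume"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "test-volume",
                        MountPath: "/test-volume",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        // Memory medium mounts a tmpfs at the volume path
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
            },
        }
        out, _ := yaml.Marshal(pod)
        fmt.Print(string(out))
    }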
• [SLOW TEST:6.083 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":638,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:03:11.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 29 22:03:11.895: INFO: Waiting up to 5m0s for pod "pod-debb7db9-4168-4fdb-b29e-a5b44a8469a7" in namespace "emptydir-4691" to be "Succeeded or Failed"
Apr 29 22:03:11.896: INFO: Pod "pod-debb7db9-4168-4fdb-b29e-a5b44a8469a7": Phase="Pending", Reason="", readiness=false. Elapsed: 1.635891ms
Apr 29 22:03:13.899: INFO: Pod "pod-debb7db9-4168-4fdb-b29e-a5b44a8469a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004445908s
Apr 29 22:03:15.903: INFO: Pod "pod-debb7db9-4168-4fdb-b29e-a5b44a8469a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007873842s
STEP: Saw pod success
Apr 29 22:03:15.903: INFO: Pod "pod-debb7db9-4168-4fdb-b29e-a5b44a8469a7" satisfied condition "Succeeded or Failed"
Apr 29 22:03:15.908: INFO: Trying to get logs from node node2 pod pod-debb7db9-4168-4fdb-b29e-a5b44a8469a7 container test-container:
STEP: delete the pod
Apr 29 22:03:15.922: INFO: Waiting for pod pod-debb7db9-4168-4fdb-b29e-a5b44a8469a7 to disappear
Apr 29 22:03:15.924: INFO: Pod pod-debb7db9-4168-4fdb-b29e-a5b44a8469a7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:03:15.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4691" for this suite.
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":503,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:00:50.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-4148 STEP: creating service affinity-nodeport in namespace services-4148 STEP: creating replication controller affinity-nodeport in namespace services-4148 I0429 22:00:50.780259 32 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-4148, replica count: 3 I0429 22:00:53.832098 32 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 22:00:56.832948 32 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 22:00:59.834594 32 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 29 22:00:59.845: INFO: Creating new exec pod Apr 29 22:01:04.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' Apr 29 22:01:05.109: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" Apr 29 22:01:05.109: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 29 22:01:05.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.22.207 80' Apr 29 22:01:05.593: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.22.207 80\nConnection to 10.233.22.207 80 port [tcp/http] succeeded!\n" Apr 29 22:01:05.593: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 29 22:01:05.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:01:05.866: INFO: rc: 1 Apr 29 22:01:05.866: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 
Apr 29 22:01:06.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799'
Apr 29 22:01:07.163: INFO: rc: 1
Apr 29 22:01:07.163: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31799
nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused
command terminated with exit code 1

error: exit status 1
Retrying...
[... the same probe is retried roughly once per second from 22:01:07.866 through 22:02:30.866; every attempt returns rc: 1 with "nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused", the two `+` shell-trace lines occasionally interleaving in stderr ...]
Apr 29 22:02:31.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799'
Apr 29 22:02:32.110: INFO: rc: 1
Apr 29 22:02:32.110: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31799
nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused
command terminated with exit code 1

error: exit status 1
Retrying...
Apr 29 22:02:32.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:33.123: INFO: rc: 1 Apr 29 22:02:33.123: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:33.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:34.102: INFO: rc: 1 Apr 29 22:02:34.102: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:34.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:35.116: INFO: rc: 1 Apr 29 22:02:35.116: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:35.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:36.519: INFO: rc: 1 Apr 29 22:02:36.519: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:02:36.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:37.153: INFO: rc: 1 Apr 29 22:02:37.153: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:37.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:38.122: INFO: rc: 1 Apr 29 22:02:38.122: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:38.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:39.107: INFO: rc: 1 Apr 29 22:02:39.107: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:39.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:40.106: INFO: rc: 1 Apr 29 22:02:40.106: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:02:40.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:41.109: INFO: rc: 1 Apr 29 22:02:41.109: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:41.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:42.108: INFO: rc: 1 Apr 29 22:02:42.108: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:42.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:43.124: INFO: rc: 1 Apr 29 22:02:43.124: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:43.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:44.108: INFO: rc: 1 Apr 29 22:02:44.109: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:02:44.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:45.114: INFO: rc: 1 Apr 29 22:02:45.114: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:45.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:46.118: INFO: rc: 1 Apr 29 22:02:46.118: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:46.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:47.103: INFO: rc: 1 Apr 29 22:02:47.103: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:47.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:48.112: INFO: rc: 1 Apr 29 22:02:48.112: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31799 + echo hostName nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:02:48.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:49.187: INFO: rc: 1 Apr 29 22:02:49.187: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:49.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:50.102: INFO: rc: 1 Apr 29 22:02:50.102: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:50.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:51.109: INFO: rc: 1 Apr 29 22:02:51.109: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:51.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:52.075: INFO: rc: 1 Apr 29 22:02:52.075: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:02:52.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:53.117: INFO: rc: 1 Apr 29 22:02:53.117: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:53.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:54.117: INFO: rc: 1 Apr 29 22:02:54.117: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:54.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:55.278: INFO: rc: 1 Apr 29 22:02:55.279: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:55.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:56.120: INFO: rc: 1 Apr 29 22:02:56.120: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:02:56.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:57.167: INFO: rc: 1 Apr 29 22:02:57.167: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:57.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:58.117: INFO: rc: 1 Apr 29 22:02:58.117: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:58.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:02:59.104: INFO: rc: 1 Apr 29 22:02:59.104: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:59.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:03:00.113: INFO: rc: 1 Apr 29 22:03:00.113: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:03:00.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:03:01.102: INFO: rc: 1 Apr 29 22:03:01.102: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:01.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:03:02.091: INFO: rc: 1 Apr 29 22:03:02.091: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:02.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:03:03.118: INFO: rc: 1 Apr 29 22:03:03.118: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:03.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799' Apr 29 22:03:04.153: INFO: rc: 1 Apr 29 22:03:04.153: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31799 nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
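For context on the loop above: the test is exec'ing into a client pod (execpod-affinitywqdrj) and piping a payload through netcat at the node IP and NodePort under test (10.10.190.207:31799); any non-zero exit code counts as "not reachable yet" and triggers another attempt. Below is a minimal standalone sketch of an equivalent probe, an editor's illustration rather than the e2e framework's own code (which, per the stack trace further down, lives in test/e2e/network/service.go); it assumes kubectl is on PATH and reuses the namespace, pod name, and endpoint from this log.

// probe_nodeport.go - editor's sketch of the reachability probe, for illustration only.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const (
		namespace = "services-4148"         // namespace from this log
		execPod   = "execpod-affinitywqdrj" // client pod the test exec's into
		endpoint  = "10.10.190.207 31799"   // node IP and NodePort under test
		timeout   = 2 * time.Minute         // matches the 2m0s deadline in the FAIL below
	)

	// The same shell pipeline seen in the log: echo a payload into netcat with a
	// 2-second TCP connect timeout; exit code 0 means the port accepted the connection.
	script := "echo hostName | nc -v -t -w 2 " + endpoint

	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config",
			"--namespace", namespace, "exec", execPod,
			"--", "/bin/sh", "-x", "-c", script).CombinedOutput()
		if err == nil {
			fmt.Printf("reachable: %s", out)
			return
		}
		fmt.Printf("probe failed (%v), retrying...\n", err)
		time.Sleep(time.Second) // the log shows roughly one attempt per second
	}
	fmt.Printf("service is not reachable within %v on endpoint %s\n", timeout, endpoint)
}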
Apr 29 22:03:06.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799'
Apr 29 22:03:06.563: INFO: rc: 1
Apr 29 22:03:06.563: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4148 exec execpod-affinitywqdrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31799:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31799
nc: connect to 10.10.190.207 port 31799 (tcp) failed: Connection refused
command terminated with exit code 1

error: exit status 1
Retrying...
Apr 29 22:03:06.564: FAIL: Unexpected error:
    <*errors.errorString | 0xc001e17130>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31799 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31799 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc00117a9a0, 0x77b33d8, 0xc004a1bb80, 0xc000e8a500, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2535
k8s.io/kubernetes/test/e2e/network.glob..func24.25()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1829 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001800a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001800a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001800a80, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Apr 29 22:03:06.565: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-4148, will wait for the garbage collector to delete the pods
Apr 29 22:03:06.643: INFO: Deleting ReplicationController affinity-nodeport took: 5.1725ms
Apr 29 22:03:06.743: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.778846ms
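A note on the failure mode: nc reported 'Connection refused' (an active reset) rather than a timeout, and the events collected below show all three affinity-nodeport pods pulled, created, and started on node2 well before the probe window. That combination usually points at the NodePort never being programmed on the probed node IP rather than at a broken backend. One way to cross-check what the API server thinks the Service looks like is to read the Service spec and its Endpoints with client-go, as in the sketch below. This is an editor's aside, not part of the suite; the Service name "affinity-nodeport" is inferred from the ReplicationController name in this log and may differ.

// inspect_service.go - editor's sketch: dump the Service's NodePort and Endpoints
// after a failure like the one above. Assumes the kubeconfig path from this log
// and a Service named "affinity-nodeport" (an inference, see lead-in).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const ns, name = "services-4148", "affinity-nodeport" // name is an assumption

	// For this test one would expect Type=NodePort, SessionAffinity=ClientIP,
	// and a nodePort matching the probed 31799.
	svc, err := cs.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("type:", svc.Spec.Type, "sessionAffinity:", svc.Spec.SessionAffinity)
	for _, p := range svc.Spec.Ports {
		fmt.Printf("port %d/%s -> nodePort %d\n", p.Port, p.Protocol, p.NodePort)
	}

	// Endpoints should list the backing pod IPs; if they are present while the
	// NodePort refuses connections, kube-proxy on the probed node is the next suspect.
	ep, err := cs.CoreV1().Endpoints(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range ep.Subsets {
		for _, a := range s.Addresses {
			fmt.Println("endpoint:", a.IP)
		}
	}
}

If the Service and Endpoints both look correct, the remaining places to look are kube-proxy's logs and the iptables/ipvs rules on the node that refused the connection.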
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-4148".
STEP: Found 27 events.
Apr 29 22:03:17.164: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-55866: { } Scheduled: Successfully assigned services-4148/affinity-nodeport-55866 to node2
Apr 29 22:03:17.164: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-ms24m: { } Scheduled: Successfully assigned services-4148/affinity-nodeport-ms24m to node2
Apr 29 22:03:17.164: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-zgzkd: { } Scheduled: Successfully assigned services-4148/affinity-nodeport-zgzkd to node2
Apr 29 22:03:17.164: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinitywqdrj: { } Scheduled: Successfully assigned services-4148/execpod-affinitywqdrj to node1
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:00:50 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-zgzkd
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:00:50 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-ms24m
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:00:50 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-55866
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:00:53 +0000 UTC - event for affinity-nodeport-zgzkd: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 284.358328ms
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:00:53 +0000 UTC - event for affinity-nodeport-zgzkd: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:00:54 +0000 UTC - event for affinity-nodeport-55866: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:00:54 +0000 UTC - event for affinity-nodeport-55866: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 585.866819ms
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:00:54 +0000 UTC - event for affinity-nodeport-55866: {kubelet node2} Created: Created container affinity-nodeport
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:00:54 +0000 UTC - event for affinity-nodeport-ms24m: {kubelet node2} Created: Created container affinity-nodeport
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:00:54 +0000 UTC - event for affinity-nodeport-ms24m: {kubelet node2} Started: Started container affinity-nodeport
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:00:54 +0000 UTC - event for affinity-nodeport-ms24m: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 315.299132ms
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:00:54 +0000 UTC - event for affinity-nodeport-ms24m: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:00:54 +0000 UTC - event for affinity-nodeport-zgzkd: {kubelet node2} Created: Created container affinity-nodeport
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:00:54 +0000 UTC - event for affinity-nodeport-zgzkd: {kubelet node2} Started: Started container affinity-nodeport
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:00:56 +0000 UTC - event for affinity-nodeport-55866: {kubelet node2} Started: Started container affinity-nodeport
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:01:01 +0000 UTC - event for execpod-affinitywqdrj: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:01:01 +0000 UTC - event for execpod-affinitywqdrj: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 304.191626ms
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:01:02 +0000 UTC - event for execpod-affinitywqdrj: {kubelet node1} Started: Started container agnhost-container
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:01:02 +0000 UTC - event for execpod-affinitywqdrj: {kubelet node1} Created: Created container agnhost-container
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:03:06 +0000 UTC - event for affinity-nodeport-55866: {kubelet node2} Killing: Stopping container affinity-nodeport
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:03:06 +0000 UTC - event for affinity-nodeport-ms24m: {kubelet node2} Killing: Stopping container affinity-nodeport
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:03:06 +0000 UTC - event for affinity-nodeport-zgzkd: {kubelet node2} Killing: Stopping container affinity-nodeport
Apr 29 22:03:17.164: INFO: At 2022-04-29 22:03:06 +0000 UTC - event for execpod-affinitywqdrj: {kubelet node1} Killing: Stopping container agnhost-container
Apr 29 22:03:17.167: INFO: POD NODE PHASE GRACE CONDITIONS
Apr 29 22:03:17.167: INFO:
Apr 29 22:03:17.172: INFO: Logging node info for node master1
Apr 29 22:03:17.176: INFO: Node Info: &Node{ObjectMeta:{master1 c968c2e7-7594-4f6e-b85d-932008e8124f 41320 0 2022-04-29 19:57:18 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-29 19:57:21 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:05:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-04-29 20:08:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:15 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:14 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:14 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:14 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:03:14 +0000 UTC,LastTransitionTime:2022-04-29 20:00:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c3419fad4d2d4c5c9574e5b11ef92b4b,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:5e0f934f-c777-4827-ade6-efec15a825ef,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f09acec459e39fddbd00d2ff6975dd7715ddae0b47f70ed62d6f52e6be7e3f22 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 29 22:03:17.177: INFO: Logging kubelet events for node master1
Apr 29 22:03:17.179: INFO: Logging pods the kubelet thinks is on node master1
Apr 29 22:03:17.212: INFO: kube-apiserver-master1 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:17.212: INFO: Container kube-apiserver ready: true, restart count 0
Apr 29 22:03:17.212: INFO: kube-controller-manager-master1 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:17.212: INFO: Container kube-controller-manager ready: true, restart count 2
Apr 29 22:03:17.212: INFO: kube-scheduler-master1 started at 2022-04-29 20:16:35 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:17.212: INFO: Container kube-scheduler ready: true, restart count 1
Apr 29 22:03:17.212: INFO: kube-proxy-9s46x started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:17.212: INFO: Container kube-proxy ready: true, restart count 1
Apr 29 22:03:17.212: INFO: kube-flannel-cskzh started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded)
Apr 29 22:03:17.212: INFO: Init container install-cni ready: true, restart count 0
Apr 29 22:03:17.212: INFO: Container kube-flannel ready: true, restart count 1
Apr 29 22:03:17.212: INFO: kube-multus-ds-amd64-w54d6 started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:17.212: INFO: Container kube-multus ready: true, restart count 1
Apr 29 22:03:17.212: INFO: node-feature-discovery-controller-cff799f9f-zpv5m started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:17.212: INFO: Container nfd-controller ready: true, restart count 0
Apr 29 22:03:17.212: INFO: node-exporter-svkqv started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded)
Apr 29 22:03:17.212: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 29 22:03:17.212: INFO: Container node-exporter ready: true, restart count 0
Apr 29 22:03:17.212: INFO: coredns-8474476ff8-59qm6 started at 2022-04-29 20:00:39 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:17.212: INFO: Container coredns ready: true, restart count 1
Apr 29 22:03:17.212: INFO: container-registry-65d7c44b96-np5nk started at 2022-04-29 20:04:54 +0000 UTC (0+2 container statuses recorded)
Apr 29 22:03:17.212: INFO: Container docker-registry ready: true, restart count 0
Apr 29 22:03:17.212: INFO: Container nginx ready: true, restart count 0
Apr 29 22:03:17.307: INFO: Latency metrics for node master1
Apr 29 22:03:17.307: INFO: Logging node info for node master2
Apr 29 22:03:17.310: INFO: Node Info: &Node{ObjectMeta:{master2 5b362581-f2d5-419c-a0b0-3aad7bec82f9 41366 0 2022-04-29 19:57:49 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-29 19:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:10:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu:
{{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:15 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:16 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:16 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:16 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:03:16 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d055250c7e194b8a9a572c232266a800,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:fb9f32a4-f021-45dd-bddf-6f1d5ae9abae,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 29 22:03:17.311: INFO: Logging kubelet events for node master2
Apr 29 22:03:17.313: INFO: Logging pods the kubelet thinks is on node master2
Apr 29 22:03:17.328: INFO: prometheus-operator-585ccfb458-q8r6q started at 2022-04-29 20:13:20 +0000 UTC (0+2 container statuses recorded)
Apr 29 22:03:17.328: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 29 22:03:17.328: INFO: Container prometheus-operator ready: true, restart count 0
Apr 29 22:03:17.328: INFO: node-exporter-9rgc2 started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded)
Apr 29 22:03:17.328: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 29 22:03:17.328: INFO: Container node-exporter ready: true, restart count 0
Apr 29 22:03:17.328: INFO: kube-controller-manager-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:17.328: INFO: Container kube-controller-manager ready: true, restart count 1
Apr 29 22:03:17.328: INFO: kube-scheduler-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:17.328: INFO: Container kube-scheduler ready: true, restart count 3
Apr 29 22:03:17.328: INFO: dns-autoscaler-7df78bfcfb-csfp5 started at 2022-04-29 20:00:43 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:17.328: INFO: Container autoscaler ready: true, restart count 1
Apr 29 22:03:17.328: INFO: coredns-8474476ff8-bg2wr started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:17.328: INFO: Container coredns ready: true, restart count 2
Apr 29 22:03:17.328: INFO: kube-apiserver-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:17.328: INFO: Container kube-apiserver ready: true, restart count 0
Apr 29 22:03:17.328: INFO: kube-proxy-4dnjw started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:17.328: INFO: Container kube-proxy ready: true, restart count 2
Apr 29 22:03:17.328: INFO: kube-flannel-q2wgv started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded)
Apr 29 22:03:17.328: INFO: Init container install-cni ready: true, restart count 0
Apr 29 22:03:17.328: INFO: Container kube-flannel ready: true, restart count 1
Apr 29 22:03:17.328: INFO: kube-multus-ds-amd64-txslv started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:17.328: INFO: Container kube-multus ready: true, restart count 1
Apr 29 22:03:17.428: INFO: Latency metrics for node master2
Apr 29 22:03:17.428: INFO: Logging node info for node master3
Apr 29 22:03:17.431: INFO: Node Info: &Node{ObjectMeta:{master3 1096e515-b559-4c90-b0f7-3398537b5f9e 41378 0 2022-04-29 19:58:00 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-29 19:58:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:10:51 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:16 +0000 UTC,LastTransitionTime:2022-04-29 20:03:16 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:17 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:17 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:17 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:03:17 +0000 UTC,LastTransitionTime:2022-04-29 20:00:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8955b376e6314525a9e533e277f5f4fb,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:6ffefaf4-8a5c-4288-a6a9-78ef35aa67ef,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 29 22:03:17.432: INFO: Logging kubelet events for node master3 Apr 29 22:03:17.434: INFO: Logging pods the kubelet thinks is on node master3 Apr 29 22:03:17.450: INFO: kube-scheduler-master3 started at 2022-04-29 20:06:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.450: INFO: Container kube-scheduler ready: true, restart count 2 Apr 29 22:03:17.450: INFO: kube-proxy-gs7qh started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.450: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:03:17.450: INFO: kube-flannel-g8w9b started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded) Apr 29 22:03:17.450: INFO: Init container install-cni ready: true, restart count 0 Apr 29 22:03:17.450: INFO: Container kube-flannel ready: true, restart count 2 Apr 29 22:03:17.450: INFO: kube-multus-ds-amd64-lxrlj started at 2022-04-29 20:00:12 +0000 UTC (0+1 
container statuses recorded) Apr 29 22:03:17.450: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:03:17.450: INFO: node-exporter-gdq6v started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded) Apr 29 22:03:17.450: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:03:17.450: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:03:17.450: INFO: kube-apiserver-master3 started at 2022-04-29 19:58:29 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.450: INFO: Container kube-apiserver ready: true, restart count 0 Apr 29 22:03:17.450: INFO: kube-controller-manager-master3 started at 2022-04-29 20:06:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.450: INFO: Container kube-controller-manager ready: true, restart count 3 Apr 29 22:03:17.524: INFO: Latency metrics for node master3 Apr 29 22:03:17.524: INFO: Logging node info for node node1 Apr 29 22:03:17.526: INFO: Node Info: &Node{ObjectMeta:{node1 6842a10e-614a-46f0-b405-bc18936b0017 41203 0 2022-04-29 19:59:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true 
flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-29 20:08:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-29 20:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-29 20:11:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:02:57 +0000 UTC,LastTransitionTime:2022-04-29 20:02:57 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:09 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:09 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:09 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:03:09 +0000 UTC,LastTransitionTime:2022-04-29 20:00:14 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2a0958eb1b3044f2963c9e5f2e902173,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:fc6a2d14-7726-4aec-9428-6617632ddcbe,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:cfef1b50441378a7b326a606756a12e664a435cc215d910f7aa9415cfde56361 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:145b4fe543408db530a0d8880c681aaa0e3df9b949467d93bcecf42e8625a181 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f09acec459e39fddbd00d2ff6975dd7715ddae0b47f70ed62d6f52e6be7e3f22 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 29 22:03:17.527: INFO: Logging kubelet events for node node1 Apr 29 22:03:17.530: INFO: Logging pods the kubelet thinks is on node node1 Apr 29 22:03:17.554: INFO: affinity-nodeport-transition-lwtf6 started at 2022-04-29 22:01:20 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.554: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Apr 29 22:03:17.554: INFO: 
node-feature-discovery-worker-kbl9s started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.554: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 22:03:17.554: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq started at 2022-04-29 20:09:17 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.554: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 22:03:17.554: INFO: cmk-init-discover-node1-gxlbt started at 2022-04-29 20:11:43 +0000 UTC (0+3 container statuses recorded) Apr 29 22:03:17.554: INFO: Container discover ready: false, restart count 0 Apr 29 22:03:17.554: INFO: Container init ready: false, restart count 0 Apr 29 22:03:17.554: INFO: Container install ready: false, restart count 0 Apr 29 22:03:17.554: INFO: prometheus-k8s-0 started at 2022-04-29 20:13:38 +0000 UTC (0+4 container statuses recorded) Apr 29 22:03:17.554: INFO: Container config-reloader ready: true, restart count 0 Apr 29 22:03:17.554: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Apr 29 22:03:17.554: INFO: Container grafana ready: true, restart count 0 Apr 29 22:03:17.554: INFO: Container prometheus ready: true, restart count 1 Apr 29 22:03:17.554: INFO: tas-telemetry-aware-scheduling-84ff454dfb-khdw5 started at 2022-04-29 20:16:34 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.554: INFO: Container tas-extender ready: true, restart count 0 Apr 29 22:03:17.554: INFO: kube-proxy-v9tgj started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.554: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:03:17.554: INFO: kubernetes-dashboard-785dcbb76d-d2k5n started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.554: INFO: Container kubernetes-dashboard ready: true, restart count 1 Apr 29 22:03:17.554: INFO: node-exporter-c8777 started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded) Apr 29 22:03:17.554: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:03:17.554: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:03:17.554: INFO: nodeport-test-fwjcj started at 2022-04-29 22:01:54 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.554: INFO: Container nodeport-test ready: true, restart count 0 Apr 29 22:03:17.554: INFO: test-webserver-62d73a7b-0659-45f2-b7d4-1a9c6685ec9c started at 2022-04-29 21:59:15 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.554: INFO: Container test-webserver ready: true, restart count 0 Apr 29 22:03:17.554: INFO: kube-flannel-47phs started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded) Apr 29 22:03:17.554: INFO: Init container install-cni ready: true, restart count 2 Apr 29 22:03:17.554: INFO: Container kube-flannel ready: true, restart count 2 Apr 29 22:03:17.554: INFO: kubernetes-metrics-scraper-5558854cb-g47c2 started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.554: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Apr 29 22:03:17.554: INFO: collectd-ccgw2 started at 2022-04-29 20:17:24 +0000 UTC (0+3 container statuses recorded) Apr 29 22:03:17.554: INFO: Container collectd ready: true, restart count 0 Apr 29 22:03:17.554: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 22:03:17.554: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 22:03:17.554: INFO: nodeport-test-5t786 started at 2022-04-29 22:01:54 +0000 UTC 
(0+1 container statuses recorded) Apr 29 22:03:17.554: INFO: Container nodeport-test ready: true, restart count 0 Apr 29 22:03:17.554: INFO: foo-znjp4 started at 2022-04-29 22:03:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.554: INFO: Container c ready: true, restart count 0 Apr 29 22:03:17.554: INFO: execpodrplcj started at 2022-04-29 22:02:00 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.554: INFO: Container agnhost-container ready: true, restart count 0 Apr 29 22:03:17.554: INFO: nginx-proxy-node1 started at 2022-04-29 19:59:05 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.554: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 22:03:17.554: INFO: kube-multus-ds-amd64-kkz4q started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.554: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:03:17.554: INFO: cmk-f5znp started at 2022-04-29 20:12:25 +0000 UTC (0+2 container statuses recorded) Apr 29 22:03:17.554: INFO: Container nodereport ready: true, restart count 0 Apr 29 22:03:17.554: INFO: Container reconcile ready: true, restart count 0 Apr 29 22:03:17.554: INFO: affinity-nodeport-transition-sw84z started at 2022-04-29 22:01:20 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.554: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Apr 29 22:03:17.774: INFO: Latency metrics for node node1 Apr 29 22:03:17.774: INFO: Logging node info for node node2 Apr 29 22:03:17.777: INFO: Node Info: &Node{ObjectMeta:{node2 2f399869-e81b-465d-97b4-806b6186d34a 41220 0 2022-04-29 19:59:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-29 20:08:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-29 20:12:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-29 20:12:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:12 +0000 UTC,LastTransitionTime:2022-04-29 20:03:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:10 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:10 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:10 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:03:10 +0000 UTC,LastTransitionTime:2022-04-29 20:03:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:22c763056cc24e6ba6e8bbadb5113d3d,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:8ca050bd-5d8a-4c59-8e02-41e26864aa92,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:cfef1b50441378a7b326a606756a12e664a435cc215d910f7aa9415cfde56361 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:145b4fe543408db530a0d8880c681aaa0e3df9b949467d93bcecf42e8625a181 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 29 22:03:17.779: INFO: Logging kubelet events for node node2 Apr 29 22:03:17.781: INFO: Logging pods the kubelet thinks is on node node2 Apr 29 22:03:17.795: INFO: kube-flannel-dbcj8 started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded) Apr 29 22:03:17.795: INFO: Init container install-cni ready: true, restart count 2 Apr 29 22:03:17.795: INFO: Container kube-flannel ready: true, restart count 3 Apr 29 22:03:17.795: INFO: kube-multus-ds-amd64-7slcd started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.795: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:03:17.795: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5 started at 2022-04-29 20:09:17 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.795: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 22:03:17.795: INFO: cmk-74bh9 started at 2022-04-29 20:12:25 +0000 UTC (0+2 container statuses recorded) Apr 29 22:03:17.795: INFO: Container nodereport ready: true, restart count 0 Apr 29 22:03:17.795: INFO: Container reconcile ready: true, restart count 0 Apr 29 22:03:17.795: INFO: node-exporter-tlpmt started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded) Apr 29 22:03:17.795: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:03:17.795: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:03:17.795: INFO: ss-1 started at 2022-04-29 22:03:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.795: INFO: Container webserver ready: 
true, restart count 0 Apr 29 22:03:17.795: INFO: kube-proxy-k6tv2 started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.795: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:03:17.795: INFO: collectd-zxs8j started at 2022-04-29 20:17:24 +0000 UTC (0+3 container statuses recorded) Apr 29 22:03:17.795: INFO: Container collectd ready: true, restart count 0 Apr 29 22:03:17.795: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 22:03:17.795: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 22:03:17.795: INFO: ss2-1 started at 2022-04-29 22:02:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.795: INFO: Container webserver ready: true, restart count 0 Apr 29 22:03:17.795: INFO: cmk-init-discover-node2-csdn7 started at 2022-04-29 20:12:03 +0000 UTC (0+3 container statuses recorded) Apr 29 22:03:17.795: INFO: Container discover ready: false, restart count 0 Apr 29 22:03:17.795: INFO: Container init ready: false, restart count 0 Apr 29 22:03:17.795: INFO: Container install ready: false, restart count 0 Apr 29 22:03:17.795: INFO: cmk-webhook-6c9d5f8578-b9mdv started at 2022-04-29 20:12:26 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.795: INFO: Container cmk-webhook ready: true, restart count 0 Apr 29 22:03:17.795: INFO: affinity-nodeport-transition-xkmjd started at 2022-04-29 22:01:20 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.795: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Apr 29 22:03:17.795: INFO: nginx-proxy-node2 started at 2022-04-29 19:59:05 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.795: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 22:03:17.795: INFO: ss2-0 started at 2022-04-29 22:02:55 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.795: INFO: Container webserver ready: true, restart count 0 Apr 29 22:03:17.795: INFO: ss-0 started at 2022-04-29 22:02:43 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.795: INFO: Container webserver ready: true, restart count 0 Apr 29 22:03:17.795: INFO: execpod-affinity7bkr4 started at 2022-04-29 22:01:29 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.795: INFO: Container agnhost-container ready: true, restart count 0 Apr 29 22:03:17.795: INFO: ss-2 started at 2022-04-29 22:03:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.795: INFO: Container webserver ready: true, restart count 0 Apr 29 22:03:17.795: INFO: foo-5lnms started at 2022-04-29 22:03:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.795: INFO: Container c ready: true, restart count 0 Apr 29 22:03:17.795: INFO: pod-subpath-test-configmap-j425 started at (0+0 container statuses recorded) Apr 29 22:03:17.795: INFO: node-feature-discovery-worker-jtjjb started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:17.795: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 22:03:18.433: INFO: Latency metrics for node node2 Apr 29 22:03:18.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4148" for this suite. 
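------------------------------ The dump above is the framework's standard post-failure diagnostics: for each node it logs the Node object ("Logging node info"), the kubelet events, the pods bound to that node, and the node latency metrics. A minimal client-go sketch of the same per-node walk, assuming only the kubeconfig path the suite itself reports; this is an illustration, not the framework's actual code:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite reports using (">>> kubeConfig: /root/.kube/config").
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("Logging pods bound to node %s\n", n.Name)
		// spec.nodeName is where the scheduler records the binding; selecting on it
		// reproduces the framework's per-node pod listing.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
			FieldSelector: "spec.nodeName=" + n.Name,
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("  %s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}
}

------------------------------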
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [147.695 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:03:06.564: Unexpected error: <*errors.errorString | 0xc001e17130>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31799 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31799 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:12.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:03:19.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2262" for this suite. • [SLOW TEST:7.036 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":-1,"completed":30,"skipped":730,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 21:59:15.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod test-webserver-62d73a7b-0659-45f2-b7d4-1a9c6685ec9c in namespace container-probe-2422 Apr 29 21:59:19.202: INFO: Started pod test-webserver-62d73a7b-0659-45f2-b7d4-1a9c6685ec9c in namespace container-probe-2422 STEP: checking the pod's current state and verifying that restartCount is present Apr 29 21:59:19.205: INFO: Initial restart count of pod test-webserver-62d73a7b-0659-45f2-b7d4-1a9c6685ec9c is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:03:19.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2422" for this suite. • [SLOW TEST:244.559 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":163,"failed":0} SSS ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":21,"skipped":409,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:18.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 29 22:03:18.481: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1d903e0-a940-4fda-a23c-1957139d17d8" in namespace "projected-9675" 
to be "Succeeded or Failed" Apr 29 22:03:18.485: INFO: Pod "downwardapi-volume-d1d903e0-a940-4fda-a23c-1957139d17d8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.646128ms Apr 29 22:03:20.489: INFO: Pod "downwardapi-volume-d1d903e0-a940-4fda-a23c-1957139d17d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007657724s Apr 29 22:03:22.493: INFO: Pod "downwardapi-volume-d1d903e0-a940-4fda-a23c-1957139d17d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011657763s STEP: Saw pod success Apr 29 22:03:22.493: INFO: Pod "downwardapi-volume-d1d903e0-a940-4fda-a23c-1957139d17d8" satisfied condition "Succeeded or Failed" Apr 29 22:03:22.495: INFO: Trying to get logs from node node1 pod downwardapi-volume-d1d903e0-a940-4fda-a23c-1957139d17d8 container client-container: STEP: delete the pod Apr 29 22:03:22.510: INFO: Waiting for pod downwardapi-volume-d1d903e0-a940-4fda-a23c-1957139d17d8 to disappear Apr 29 22:03:22.512: INFO: Pod downwardapi-volume-d1d903e0-a940-4fda-a23c-1957139d17d8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:03:22.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9675" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":409,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:22.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:03:22.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4471" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":23,"skipped":431,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:19.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Apr 29 22:03:19.457: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Apr 29 22:03:19.463: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Apr 29 22:03:19.463: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Apr 29 22:03:19.475: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Apr 29 22:03:19.475: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Apr 29 22:03:19.488: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Apr 29 22:03:19.488: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources 
STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Apr 29 22:03:26.537: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:03:26.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-2097" for this suite. • [SLOW TEST:7.124 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:22.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 29 22:03:22.693: INFO: Waiting up to 5m0s for pod "pod-a05df0f1-a3b4-4a94-ad4a-e8e7caa7294a" in namespace "emptydir-6509" to be "Succeeded or Failed" Apr 29 22:03:22.695: INFO: Pod "pod-a05df0f1-a3b4-4a94-ad4a-e8e7caa7294a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163051ms Apr 29 22:03:24.699: INFO: Pod "pod-a05df0f1-a3b4-4a94-ad4a-e8e7caa7294a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006235754s Apr 29 22:03:26.703: INFO: Pod "pod-a05df0f1-a3b4-4a94-ad4a-e8e7caa7294a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009621907s Apr 29 22:03:28.707: INFO: Pod "pod-a05df0f1-a3b4-4a94-ad4a-e8e7caa7294a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014126929s STEP: Saw pod success Apr 29 22:03:28.707: INFO: Pod "pod-a05df0f1-a3b4-4a94-ad4a-e8e7caa7294a" satisfied condition "Succeeded or Failed" Apr 29 22:03:28.710: INFO: Trying to get logs from node node2 pod pod-a05df0f1-a3b4-4a94-ad4a-e8e7caa7294a container test-container: STEP: delete the pod Apr 29 22:03:28.722: INFO: Waiting for pod pod-a05df0f1-a3b4-4a94-ad4a-e8e7caa7294a to disappear Apr 29 22:03:28.724: INFO: Pod pod-a05df0f1-a3b4-4a94-ad4a-e8e7caa7294a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:03:28.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6509" for this suite. 
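(The emptydir test above exercises a tmpfs-backed emptyDir with a 0666-mode file written as a non-root user. A minimal hand-runnable sketch of the same volume setup follows; the pod name, image, user ID, and commands are illustrative assumptions, not the e2e test's own pod spec — the key points are medium: Memory for tmpfs and a non-root securityContext.)

    $ kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0666-demo          # hypothetical name
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001                 # non-root, as in the (non-root,0666,tmpfs) variant
      containers:
      - name: test-container
        image: busybox:1.34             # stand-in for the e2e test image
        command: ["sh", "-c", "touch /mnt/volume/f && chmod 0666 /mnt/volume/f && stat -c %a /mnt/volume/f"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt/volume
      volumes:
      - name: scratch
        emptyDir:
          medium: Memory                # tmpfs-backed emptyDir
    EOF
    $ kubectl logs emptydir-0666-demo   # prints 666 once the pod has Succeeded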
• [SLOW TEST:6.069 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":458,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:02:43.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-2978 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating stateful set ss in namespace statefulset-2978 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2978 Apr 29 22:02:43.864: INFO: Found 0 stateful pods, waiting for 1 Apr 29 22:02:53.868: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 29 22:02:53.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2978 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 29 22:02:54.123: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 29 22:02:54.123: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 29 22:02:54.123: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 29 22:02:54.125: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 29 22:03:04.130: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 29 22:03:04.130: INFO: Waiting for statefulset status.replicas updated to 0 Apr 29 22:03:04.141: INFO: POD NODE PHASE GRACE CONDITIONS Apr 29 22:03:04.141: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:43 +0000 UTC }] Apr 29 
22:03:04.141: INFO: Apr 29 22:03:04.141: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 29 22:03:05.146: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996588935s Apr 29 22:03:06.150: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991713978s Apr 29 22:03:07.154: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.987925437s Apr 29 22:03:08.157: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.984426544s Apr 29 22:03:09.162: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.980876317s Apr 29 22:03:10.166: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.976885881s Apr 29 22:03:11.169: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.972867558s Apr 29 22:03:12.173: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.969726093s Apr 29 22:03:13.176: INFO: Verifying statefulset ss doesn't scale past 3 for another 965.764537ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2978 Apr 29 22:03:14.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2978 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 29 22:03:14.434: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Apr 29 22:03:14.434: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 29 22:03:14.434: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 29 22:03:14.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2978 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 29 22:03:14.661: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Apr 29 22:03:14.661: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 29 22:03:14.661: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 29 22:03:14.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2978 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 29 22:03:15.130: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Apr 29 22:03:15.131: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 29 22:03:15.131: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 29 22:03:15.134: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 29 22:03:15.134: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 29 22:03:15.134: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 29 22:03:15.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2978 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 29 22:03:15.355: INFO: stderr: "+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\n" Apr 29 22:03:15.355: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 29 22:03:15.355: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 29 22:03:15.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2978 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 29 22:03:15.580: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 29 22:03:15.580: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 29 22:03:15.580: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 29 22:03:15.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2978 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 29 22:03:15.825: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 29 22:03:15.825: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 29 22:03:15.825: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 29 22:03:15.825: INFO: Waiting for statefulset status.replicas updated to 0 Apr 29 22:03:15.828: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 29 22:03:25.835: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 29 22:03:25.835: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 29 22:03:25.835: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 29 22:03:25.845: INFO: POD NODE PHASE GRACE CONDITIONS Apr 29 22:03:25.845: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:43 +0000 UTC }] Apr 29 22:03:25.845: INFO: ss-1 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC }] Apr 29 22:03:25.845: INFO: ss-2 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC }] Apr 29 22:03:25.845: INFO: Apr 29 22:03:25.845: INFO: StatefulSet ss has not 
reached scale 0, at 3 Apr 29 22:03:26.850: INFO: POD NODE PHASE GRACE CONDITIONS Apr 29 22:03:26.850: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:43 +0000 UTC }] Apr 29 22:03:26.850: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC }] Apr 29 22:03:26.850: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC }] Apr 29 22:03:26.850: INFO: Apr 29 22:03:26.850: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 29 22:03:27.855: INFO: POD NODE PHASE GRACE CONDITIONS Apr 29 22:03:27.855: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:43 +0000 UTC }] Apr 29 22:03:27.855: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC }] Apr 29 22:03:27.855: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC }] Apr 29 22:03:27.855: INFO: Apr 29 22:03:27.855: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 29 22:03:28.858: INFO: POD NODE PHASE GRACE CONDITIONS Apr 29 22:03:28.858: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2022-04-29 22:03:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:43 +0000 UTC }] Apr 29 22:03:28.858: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC }] Apr 29 22:03:28.858: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC }] Apr 29 22:03:28.858: INFO: Apr 29 22:03:28.858: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 29 22:03:29.863: INFO: POD NODE PHASE GRACE CONDITIONS Apr 29 22:03:29.863: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:43 +0000 UTC }] Apr 29 22:03:29.863: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC }] Apr 29 22:03:29.863: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC }] Apr 29 22:03:29.863: INFO: Apr 29 22:03:29.863: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 29 22:03:30.867: INFO: POD NODE PHASE GRACE CONDITIONS Apr 29 22:03:30.867: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:43 +0000 UTC }] Apr 29 22:03:30.867: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC }] Apr 29 22:03:30.867: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC }] Apr 29 22:03:30.867: INFO: Apr 29 22:03:30.867: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 29 22:03:31.871: INFO: POD NODE PHASE GRACE CONDITIONS Apr 29 22:03:31.871: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:43 +0000 UTC }] Apr 29 22:03:31.871: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC }] Apr 29 22:03:31.871: INFO: Apr 29 22:03:31.871: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 29 22:03:32.874: INFO: POD NODE PHASE GRACE CONDITIONS Apr 29 22:03:32.874: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:43 +0000 UTC }] Apr 29 22:03:32.874: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC }] Apr 29 22:03:32.874: INFO: Apr 29 22:03:32.874: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 29 22:03:33.877: INFO: POD NODE PHASE GRACE CONDITIONS Apr 29 22:03:33.877: INFO: 
ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:43 +0000 UTC }] Apr 29 22:03:33.877: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC }] Apr 29 22:03:33.877: INFO: Apr 29 22:03:33.877: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 29 22:03:34.880: INFO: POD NODE PHASE GRACE CONDITIONS Apr 29 22:03:34.880: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:43 +0000 UTC }] Apr 29 22:03:34.880: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:03:04 +0000 UTC }] Apr 29 22:03:34.880: INFO: Apr 29 22:03:34.880: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-2978 Apr 29 22:03:35.883: INFO: Scaling statefulset ss to 0 Apr 29 22:03:35.892: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Apr 29 22:03:35.894: INFO: Deleting all statefulset in ns statefulset-2978 Apr 29 22:03:35.897: INFO: Scaling statefulset ss to 0 Apr 29 22:03:35.904: INFO: Waiting for statefulset status.replicas updated to 0 Apr 29 22:03:35.906: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:03:35.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2978" for this suite.
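(Burst scaling as exercised above hinges on the StatefulSet's podManagementPolicy: the default OrderedReady would block scale-up and scale-down on the unready pods, while Parallel lets all replicas start and terminate at once — which is why the scale operations proceed even while ss-0 reports Ready=false after its index.html is moved away. A hand-runnable sketch under that assumption; the ss-demo name and probe path are illustrative, the image tag is the one used in this run.)

    $ kubectl create service clusterip test --clusterip="None"   # headless service, as 'test' above
    $ kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: ss-demo                      # hypothetical name
    spec:
      serviceName: test
      replicas: 1
      podManagementPolicy: Parallel      # burst scaling: no waiting on ordinal order
      selector:
        matchLabels:
          app: ss-demo
      template:
        metadata:
          labels:
            app: ss-demo
        spec:
          containers:
          - name: webserver
            image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
            readinessProbe:              # removing index.html (the mv above) fails this probe
              httpGet:
                path: /index.html
                port: 80
    EOF
    $ kubectl scale statefulset ss-demo --replicas=3   # scales up even while ss-0 is unready
    $ kubectl scale statefulset ss-demo --replicas=0   # scales down even with unready pods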
• [SLOW TEST:52.088 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":32,"skipped":442,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":159,"failed":0} [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:00:24.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-5625 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet Apr 29 22:00:24.635: INFO: Found 0 stateful pods, waiting for 3 Apr 29 22:00:34.641: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 29 22:00:34.641: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 29 22:00:34.641: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 29 22:00:44.640: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 29 22:00:44.640: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 29 22:00:44.640: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 29 22:00:44.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5625 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 29 22:00:45.198: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 29 22:00:45.198: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 29 22:00:45.198: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Apr 29 22:00:55.229: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order 
Apr 29 22:01:05.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5625 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 29 22:01:05.484: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Apr 29 22:01:05.484: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 29 22:01:05.484: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 29 22:01:15.498: INFO: Waiting for StatefulSet statefulset-5625/ss2 to complete update Apr 29 22:01:15.498: INFO: Waiting for Pod statefulset-5625/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Apr 29 22:01:15.498: INFO: Waiting for Pod statefulset-5625/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Apr 29 22:01:25.509: INFO: Waiting for StatefulSet statefulset-5625/ss2 to complete update Apr 29 22:01:25.509: INFO: Waiting for Pod statefulset-5625/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Apr 29 22:01:25.509: INFO: Waiting for Pod statefulset-5625/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Apr 29 22:01:35.504: INFO: Waiting for StatefulSet statefulset-5625/ss2 to complete update Apr 29 22:01:35.504: INFO: Waiting for Pod statefulset-5625/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Apr 29 22:01:45.504: INFO: Waiting for StatefulSet statefulset-5625/ss2 to complete update Apr 29 22:01:45.504: INFO: Waiting for Pod statefulset-5625/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Apr 29 22:01:55.504: INFO: Waiting for StatefulSet statefulset-5625/ss2 to complete update STEP: Rolling back to a previous revision Apr 29 22:02:05.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5625 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 29 22:02:05.739: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 29 22:02:05.739: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 29 22:02:05.739: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 29 22:02:15.770: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 29 22:02:25.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5625 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 29 22:02:26.068: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Apr 29 22:02:26.069: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 29 22:02:26.069: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 29 22:02:56.087: INFO: Waiting for StatefulSet statefulset-5625/ss2 to complete update [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Apr 29 22:03:06.093: INFO: Deleting all statefulset in ns statefulset-5625 Apr 29 22:03:06.095: INFO: Scaling statefulset ss2 to 0 Apr 29 22:03:36.109: INFO: Waiting for statefulset status.replicas updated to 0 Apr 29 22:03:36.111: INFO: 
Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:03:36.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5625" for this suite. • [SLOW TEST:191.519 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":9,"skipped":159,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:36.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:03:36.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9444" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":10,"skipped":180,"failed":0} SSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":31,"skipped":785,"failed":0} [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:26.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 Apr 29 22:03:26.578: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the sample API server. 
Apr 29 22:03:26.994: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 29 22:03:29.023: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866607, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866607, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866607, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866606, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 22:03:31.027: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866607, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866607, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866607, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866606, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 22:03:33.027: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866607, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866607, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866607, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866606, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 22:03:35.028: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866607, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866607, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866607, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866606, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 22:03:37.028: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866607, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866607, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866607, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866606, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 22:03:40.449: INFO: Waited 1.413540057s for the sample-apiserver to be ready to handle requests. STEP: Read Status for v1alpha1.wardle.example.com STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' STEP: List APIServices Apr 29 22:03:40.850: INFO: Found v1alpha1.wardle.example.com in APIServiceList [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:03:41.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-2801" for this suite. 
• [SLOW TEST:15.185 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":32,"skipped":785,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:15.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-j425 STEP: Creating a pod to test atomic-volume-subpath Apr 29 22:03:16.014: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-j425" in namespace "subpath-2479" to be "Succeeded or Failed" Apr 29 22:03:16.016: INFO: Pod "pod-subpath-test-configmap-j425": Phase="Pending", Reason="", readiness=false. Elapsed: 1.970519ms Apr 29 22:03:18.020: INFO: Pod "pod-subpath-test-configmap-j425": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005394543s Apr 29 22:03:20.025: INFO: Pod "pod-subpath-test-configmap-j425": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010222765s Apr 29 22:03:22.028: INFO: Pod "pod-subpath-test-configmap-j425": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013127657s Apr 29 22:03:24.032: INFO: Pod "pod-subpath-test-configmap-j425": Phase="Running", Reason="", readiness=true. Elapsed: 8.017811662s Apr 29 22:03:26.036: INFO: Pod "pod-subpath-test-configmap-j425": Phase="Running", Reason="", readiness=true. Elapsed: 10.021935047s Apr 29 22:03:28.041: INFO: Pod "pod-subpath-test-configmap-j425": Phase="Running", Reason="", readiness=true. Elapsed: 12.026322361s Apr 29 22:03:30.044: INFO: Pod "pod-subpath-test-configmap-j425": Phase="Running", Reason="", readiness=true. Elapsed: 14.030002324s Apr 29 22:03:32.048: INFO: Pod "pod-subpath-test-configmap-j425": Phase="Running", Reason="", readiness=true. Elapsed: 16.033884394s Apr 29 22:03:34.054: INFO: Pod "pod-subpath-test-configmap-j425": Phase="Running", Reason="", readiness=true. Elapsed: 18.03916067s Apr 29 22:03:36.058: INFO: Pod "pod-subpath-test-configmap-j425": Phase="Running", Reason="", readiness=true. Elapsed: 20.043057516s Apr 29 22:03:38.063: INFO: Pod "pod-subpath-test-configmap-j425": Phase="Running", Reason="", readiness=true. Elapsed: 22.048153061s Apr 29 22:03:40.067: INFO: Pod "pod-subpath-test-configmap-j425": Phase="Running", Reason="", readiness=true. Elapsed: 24.052634073s Apr 29 22:03:42.073: INFO: Pod "pod-subpath-test-configmap-j425": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.058191142s STEP: Saw pod success Apr 29 22:03:42.073: INFO: Pod "pod-subpath-test-configmap-j425" satisfied condition "Succeeded or Failed" Apr 29 22:03:42.075: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-j425 container test-container-subpath-configmap-j425: STEP: delete the pod Apr 29 22:03:42.088: INFO: Waiting for pod pod-subpath-test-configmap-j425 to disappear Apr 29 22:03:42.091: INFO: Pod pod-subpath-test-configmap-j425 no longer exists STEP: Deleting pod pod-subpath-test-configmap-j425 Apr 29 22:03:42.091: INFO: Deleting pod "pod-subpath-test-configmap-j425" in namespace "subpath-2479" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:03:42.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2479" for this suite. • [SLOW TEST:26.126 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":31,"skipped":526,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:19.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready Apr 29 22:03:19.760: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 0 and labels map[test-deployment-static:true] Apr 29 22:03:19.760: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 0 and labels map[test-deployment-static:true] Apr 29 22:03:19.763: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 0 and labels map[test-deployment-static:true] Apr 29 22:03:19.763: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 0 and labels map[test-deployment-static:true] Apr 29 22:03:19.769: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 0 and labels map[test-deployment-static:true] Apr 29 22:03:19.769: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 0 and labels map[test-deployment-static:true] Apr 29 22:03:19.794: INFO: observed Deployment test-deployment in namespace 
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:03:19.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86
[It] should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a Deployment
STEP: waiting for Deployment to be created
STEP: waiting for all Replicas to be Ready
Apr 29 22:03:19.760: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Apr 29 22:03:19.760: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Apr 29 22:03:19.763: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Apr 29 22:03:19.763: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Apr 29 22:03:19.769: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Apr 29 22:03:19.769: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Apr 29 22:03:19.794: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Apr 29 22:03:19.794: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Apr 29 22:03:22.935: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 1 and labels map[test-deployment-static:true]
Apr 29 22:03:22.935: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 1 and labels map[test-deployment-static:true]
Apr 29 22:03:27.138: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 2 and labels map[test-deployment-static:true]
STEP: patching the Deployment
Apr 29 22:03:27.143: INFO: observed event type ADDED
STEP: waiting for Replicas to scale
Apr 29 22:03:27.145: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 0
Apr 29 22:03:27.145: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 0
Apr 29 22:03:27.145: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 0
Apr 29 22:03:27.145: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 0
Apr 29 22:03:27.145: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 0
Apr 29 22:03:27.145: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 0
Apr 29 22:03:27.145: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 0
Apr 29 22:03:27.145: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 0
Apr 29 22:03:27.145: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 1
Apr 29 22:03:27.145: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 1
Apr 29 22:03:27.145: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 2
Apr 29 22:03:27.145: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 2
Apr 29 22:03:27.145: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 2
Apr 29 22:03:27.145: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 2
Apr 29 22:03:27.148: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 2
Apr 29 22:03:27.148: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 2
Apr 29 22:03:27.154: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 2
Apr 29 22:03:27.154: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 2
Apr 29 22:03:27.161: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 1
Apr 29 22:03:27.161: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 1
Apr 29 22:03:27.171: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 1
Apr 29 22:03:27.171: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 1
Apr 29 22:03:32.140: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 2
Apr 29 22:03:32.140: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 2
Apr 29 22:03:32.152: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 1
STEP: listing Deployments
Apr 29 22:03:32.156: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true]
STEP: updating the Deployment
Apr 29 22:03:32.169: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 1
STEP: fetching the DeploymentStatus
Apr 29 22:03:32.176: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Apr 29 22:03:32.176: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Apr 29 22:03:32.184: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Apr 29 22:03:32.193: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Apr 29 22:03:32.199: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Apr 29 22:03:37.721: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Apr 29 22:03:38.126: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true]
Apr 29 22:03:38.139: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Apr 29 22:03:38.147: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Apr 29 22:03:45.181: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true]
STEP: patching the DeploymentStatus
STEP: fetching the DeploymentStatus
Apr 29 22:03:45.204: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 1
Apr 29 22:03:45.204: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 1
Apr 29 22:03:45.204: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 1
Apr 29 22:03:45.204: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 1
Apr 29 22:03:45.204: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 1
Apr 29 22:03:45.205: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 2
Apr 29 22:03:45.205: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 3
Apr 29 22:03:45.205: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 2
Apr 29 22:03:45.205: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 2
Apr 29 22:03:45.205: INFO: observed Deployment test-deployment in namespace deployment-1117 with ReadyReplicas 3
STEP: deleting the Deployment
Apr 29 22:03:45.211: INFO: observed event type MODIFIED
Apr 29 22:03:45.211: INFO: observed event type MODIFIED
Apr 29 22:03:45.211: INFO: observed event type MODIFIED
Apr 29 22:03:45.211: INFO: observed event type MODIFIED
Apr 29 22:03:45.211: INFO: observed event type MODIFIED
Apr 29 22:03:45.211: INFO: observed event type MODIFIED
Apr 29 22:03:45.211: INFO: observed event type MODIFIED
Apr 29 22:03:45.211: INFO: observed event type MODIFIED
Apr 29 22:03:45.211: INFO: observed event type MODIFIED
Apr 29 22:03:45.212: INFO: observed event type MODIFIED
Apr 29 22:03:45.212: INFO: observed event type MODIFIED
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80
Apr 29 22:03:45.215: INFO: Log out all the ReplicaSets if there is no deployment created
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:03:45.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1117" for this suite.

• [SLOW TEST:25.495 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":11,"skipped":166,"failed":0}
SSSSSSS
------------------------------
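The lifecycle steps logged above (create, watch ReadyReplicas, patch, update, patch status, delete) map directly onto client-go calls. Below is a minimal sketch of the create/watch/patch portion under the assumption of a plain kubeconfig at the path the suite logs; the namespace and agnhost image tag are illustrative, and the status-patch and delete steps are left out for brevity.

package main

import (
    "context"
    "fmt"
    "time"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)
    ns := "default" // hypothetical; the suite uses a generated namespace
    replicas := int32(2)

    d := &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "test-deployment",
            Labels: map[string]string{"test-deployment-static": "true"},
        },
        Spec: appsv1.DeploymentSpec{
            Replicas: &replicas,
            Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "test-deployment"}},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "test-deployment"}},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{Name: "agnhost", Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32"}},
                },
            },
        },
    }
    if _, err := client.AppsV1().Deployments(ns).Create(context.TODO(), d, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    // Watch ReadyReplicas converge, mirroring the "observed Deployment ..." lines.
    ctx, cancel := context.WithTimeout(context.TODO(), 2*time.Minute)
    defer cancel()
    w, err := client.AppsV1().Deployments(ns).Watch(ctx, metav1.ListOptions{
        LabelSelector: "test-deployment-static=true",
    })
    if err != nil {
        panic(err)
    }
    for ev := range w.ResultChan() {
        dep, ok := ev.Object.(*appsv1.Deployment)
        if !ok {
            continue
        }
        fmt.Printf("observed Deployment %s with ReadyReplicas %d and labels %v\n",
            dep.Name, dep.Status.ReadyReplicas, dep.Labels)
        if dep.Status.ReadyReplicas == replicas {
            break
        }
    }
    w.Stop()

    // Patch labels and scale down, as the test's "patching the Deployment" step does.
    patch := []byte(`{"metadata":{"labels":{"test-deployment":"patched"}},"spec":{"replicas":1}}`)
    if _, err := client.AppsV1().Deployments(ns).Patch(context.TODO(), "test-deployment",
        types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
        panic(err)
    }
}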
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:03:45.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should delete a collection of events [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create set of events
STEP: get a list of Events with a label in the current namespace
STEP: delete a list of events
Apr 29 22:03:45.274: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
[AfterEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:03:45.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8318" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":12,"skipped":173,"failed":0}
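The Events API case boils down to three calls: create a labeled set of events, DeleteCollection with the same label selector, then list to confirm the remaining count. A minimal sketch with hypothetical names (demo-event-*, label testevent-set=true):

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)
    ns := "default" // hypothetical namespace

    // Create a small, labeled set of events.
    for i := 0; i < 3; i++ {
        ev := &corev1.Event{
            ObjectMeta: metav1.ObjectMeta{
                Name:   fmt.Sprintf("demo-event-%d", i),
                Labels: map[string]string{"testevent-set": "true"},
            },
            InvolvedObject: corev1.ObjectReference{Kind: "Pod", Namespace: ns, Name: "demo"},
            Type:           corev1.EventTypeNormal,
            Reason:         "Demo",
            Message:        "synthetic event for DeleteCollection",
        }
        if _, err := client.CoreV1().Events(ns).Create(context.TODO(), ev, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }

    // Delete the whole labeled set in one call ("requesting DeleteCollection of events").
    sel := metav1.ListOptions{LabelSelector: "testevent-set=true"}
    if err := client.CoreV1().Events(ns).DeleteCollection(context.TODO(), metav1.DeleteOptions{}, sel); err != nil {
        panic(err)
    }

    // Verify the list now matches the requested quantity (zero).
    left, err := client.CoreV1().Events(ns).List(context.TODO(), sel)
    if err != nil {
        panic(err)
    }
    fmt.Printf("events remaining with label: %d\n", len(left.Items))
}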
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:03:45.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename discovery
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39
STEP: Setting up server cert
[It] should validate PreferredVersion for each APIGroup [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 29 22:03:46.110: INFO: Checking APIGroup: apiregistration.k8s.io
Apr 29 22:03:46.111: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1
Apr 29 22:03:46.111: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}]
Apr 29 22:03:46.111: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1
Apr 29 22:03:46.111: INFO: Checking APIGroup: apps
Apr 29 22:03:46.111: INFO: PreferredVersion.GroupVersion: apps/v1
Apr 29 22:03:46.111: INFO: Versions found [{apps/v1 v1}]
Apr 29 22:03:46.111: INFO: apps/v1 matches apps/v1
Apr 29 22:03:46.111: INFO: Checking APIGroup: events.k8s.io
Apr 29 22:03:46.112: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1
Apr 29 22:03:46.112: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}]
Apr 29 22:03:46.112: INFO: events.k8s.io/v1 matches events.k8s.io/v1
Apr 29 22:03:46.112: INFO: Checking APIGroup: authentication.k8s.io
Apr 29 22:03:46.113: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1
Apr 29 22:03:46.113: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}]
Apr 29 22:03:46.113: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1
Apr 29 22:03:46.113: INFO: Checking APIGroup: authorization.k8s.io
Apr 29 22:03:46.116: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1
Apr 29 22:03:46.116: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}]
Apr 29 22:03:46.116: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1
Apr 29 22:03:46.116: INFO: Checking APIGroup: autoscaling
Apr 29 22:03:46.117: INFO: PreferredVersion.GroupVersion: autoscaling/v1
Apr 29 22:03:46.117: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}]
Apr 29 22:03:46.117: INFO: autoscaling/v1 matches autoscaling/v1
Apr 29 22:03:46.117: INFO: Checking APIGroup: batch
Apr 29 22:03:46.118: INFO: PreferredVersion.GroupVersion: batch/v1
Apr 29 22:03:46.118: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}]
Apr 29 22:03:46.118: INFO: batch/v1 matches batch/v1
Apr 29 22:03:46.118: INFO: Checking APIGroup: certificates.k8s.io
Apr 29 22:03:46.119: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1
Apr 29 22:03:46.119: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}]
Apr 29 22:03:46.119: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1
Apr 29 22:03:46.119: INFO: Checking APIGroup: networking.k8s.io
Apr 29 22:03:46.120: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1
Apr 29 22:03:46.120: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}]
Apr 29 22:03:46.120: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1
Apr 29 22:03:46.120: INFO: Checking APIGroup: extensions
Apr 29 22:03:46.121: INFO: PreferredVersion.GroupVersion: extensions/v1beta1
Apr 29 22:03:46.121: INFO: Versions found [{extensions/v1beta1 v1beta1}]
Apr 29 22:03:46.121: INFO: extensions/v1beta1 matches extensions/v1beta1
Apr 29 22:03:46.121: INFO: Checking APIGroup: policy
Apr 29 22:03:46.122: INFO: PreferredVersion.GroupVersion: policy/v1
Apr 29 22:03:46.122: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}]
Apr 29 22:03:46.122: INFO: policy/v1 matches policy/v1
Apr 29 22:03:46.122: INFO: Checking APIGroup: rbac.authorization.k8s.io
Apr 29 22:03:46.123: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1
Apr 29 22:03:46.123: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}]
Apr 29 22:03:46.123: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1
Apr 29 22:03:46.123: INFO: Checking APIGroup: storage.k8s.io
Apr 29 22:03:46.124: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1
Apr 29 22:03:46.124: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}]
Apr 29 22:03:46.124: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1
Apr 29 22:03:46.124: INFO: Checking APIGroup: admissionregistration.k8s.io
Apr 29 22:03:46.125: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1
Apr 29 22:03:46.125: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}]
Apr 29 22:03:46.125: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1
Apr 29 22:03:46.125: INFO: Checking APIGroup: apiextensions.k8s.io
Apr 29 22:03:46.125: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1
Apr 29 22:03:46.125: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}]
Apr 29 22:03:46.125: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1
Apr 29 22:03:46.125: INFO: Checking APIGroup: scheduling.k8s.io
Apr 29 22:03:46.126: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1
Apr 29 22:03:46.126: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}]
Apr 29 22:03:46.126: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1
Apr 29 22:03:46.126: INFO: Checking APIGroup: coordination.k8s.io
Apr 29 22:03:46.127: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1
Apr 29 22:03:46.127: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}]
Apr 29 22:03:46.127: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1
Apr 29 22:03:46.127: INFO: Checking APIGroup: node.k8s.io
Apr 29 22:03:46.128: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1
Apr 29 22:03:46.128: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}]
Apr 29 22:03:46.128: INFO: node.k8s.io/v1 matches node.k8s.io/v1
Apr 29 22:03:46.128: INFO: Checking APIGroup: discovery.k8s.io
Apr 29 22:03:46.129: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1
Apr 29 22:03:46.129: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}]
Apr 29 22:03:46.129: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1
Apr 29 22:03:46.129: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io
Apr 29 22:03:46.130: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1
Apr 29 22:03:46.130: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}]
Apr 29 22:03:46.130: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1
Apr 29 22:03:46.130: INFO: Checking APIGroup: intel.com
Apr 29 22:03:46.130: INFO: PreferredVersion.GroupVersion: intel.com/v1
Apr 29 22:03:46.130: INFO: Versions found [{intel.com/v1 v1}]
Apr 29 22:03:46.130: INFO: intel.com/v1 matches intel.com/v1
Apr 29 22:03:46.130: INFO: Checking APIGroup: k8s.cni.cncf.io
Apr 29 22:03:46.131: INFO: PreferredVersion.GroupVersion: k8s.cni.cncf.io/v1
Apr 29 22:03:46.131: INFO: Versions found [{k8s.cni.cncf.io/v1 v1}]
Apr 29 22:03:46.131: INFO: k8s.cni.cncf.io/v1 matches k8s.cni.cncf.io/v1
Apr 29 22:03:46.131: INFO: Checking APIGroup: monitoring.coreos.com
Apr 29 22:03:46.132: INFO: PreferredVersion.GroupVersion: monitoring.coreos.com/v1
Apr 29 22:03:46.132: INFO: Versions found [{monitoring.coreos.com/v1 v1} {monitoring.coreos.com/v1alpha1 v1alpha1}]
Apr 29 22:03:46.132: INFO: monitoring.coreos.com/v1 matches monitoring.coreos.com/v1
Apr 29 22:03:46.132: INFO: Checking APIGroup: telemetry.intel.com
Apr 29 22:03:46.133: INFO: PreferredVersion.GroupVersion: telemetry.intel.com/v1alpha1
Apr 29 22:03:46.133: INFO: Versions found [{telemetry.intel.com/v1alpha1 v1alpha1}]
Apr 29 22:03:46.133: INFO: telemetry.intel.com/v1alpha1 matches telemetry.intel.com/v1alpha1
Apr 29 22:03:46.133: INFO: Checking APIGroup: custom.metrics.k8s.io
Apr 29 22:03:46.134: INFO: PreferredVersion.GroupVersion: custom.metrics.k8s.io/v1beta1
Apr 29 22:03:46.134: INFO: Versions found [{custom.metrics.k8s.io/v1beta1 v1beta1}]
Apr 29 22:03:46.134: INFO: custom.metrics.k8s.io/v1beta1 matches custom.metrics.k8s.io/v1beta1
[AfterEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 22:03:46.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-5803" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":13,"skipped":173,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
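What the PreferredVersion check does: for every group returned by the /apis discovery endpoint, confirm that the server's preferred GroupVersion appears in that group's Versions list; each "X matches X" line above is one such confirmation. A minimal sketch using client-go's discovery client (kubeconfig path as logged by the suite):

package main

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // ServerGroups hits /apis and returns every APIGroup with its versions
    // plus the server's preferred one.
    groups, err := client.Discovery().ServerGroups()
    if err != nil {
        panic(err)
    }
    for _, g := range groups.Groups {
        fmt.Printf("Checking APIGroup: %s\n", g.Name)
        fmt.Printf("PreferredVersion.GroupVersion: %s\n", g.PreferredVersion.GroupVersion)
        found := false
        for _, v := range g.Versions {
            if v.GroupVersion == g.PreferredVersion.GroupVersion {
                found = true
            }
        }
        if !found {
            fmt.Printf("preferred version %s missing from Versions list!\n", g.PreferredVersion.GroupVersion)
        }
    }
}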
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:01:20.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service in namespace services-9023
STEP: creating service affinity-nodeport-transition in namespace services-9023
STEP: creating replication controller affinity-nodeport-transition in namespace services-9023
I0429 22:01:20.520117      37 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-9023, replica count: 3
I0429 22:01:23.571082      37 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0429 22:01:26.572361      37 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0429 22:01:29.573816      37 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 29 22:01:29.585: INFO: Creating new exec pod
Apr 29 22:01:34.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
Apr 29 22:01:34.854: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n"
Apr 29 22:01:34.854: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr 29 22:01:34.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.44.185 80'
Apr 29 22:01:35.105: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.44.185 80\nConnection to 10.233.44.185 80 port [tcp/http] succeeded!\n"
Apr 29 22:01:35.105: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr 29 22:01:35.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913'
Apr 29 22:01:35.345: INFO: rc: 1
Apr 29 22:01:35.345: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30913
nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
(The same nc probe against 10.10.190.207 30913 is retried roughly once per second; every attempt from Apr 29 22:01:36.346 through Apr 29 22:02:30.785 fails identically with "nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused", and the repeated blocks are elided here.)
Apr 29 22:02:31.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913'
Apr 29 22:02:31.575: INFO: rc: 1
Apr 29 22:02:31.575: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30913
+ echo hostName
nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 22:02:32.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:32.558: INFO: rc: 1 Apr 29 22:02:32.558: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:33.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:33.594: INFO: rc: 1 Apr 29 22:02:33.594: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:34.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:34.576: INFO: rc: 1 Apr 29 22:02:34.576: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:35.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:35.572: INFO: rc: 1 Apr 29 22:02:35.572: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:02:36.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:36.567: INFO: rc: 1 Apr 29 22:02:36.567: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:37.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:37.599: INFO: rc: 1 Apr 29 22:02:37.599: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:38.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:38.571: INFO: rc: 1 Apr 29 22:02:38.571: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:39.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:39.671: INFO: rc: 1 Apr 29 22:02:39.671: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:02:40.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:40.617: INFO: rc: 1 Apr 29 22:02:40.617: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:41.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:41.768: INFO: rc: 1 Apr 29 22:02:41.768: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:42.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:42.636: INFO: rc: 1 Apr 29 22:02:42.636: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:43.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:43.593: INFO: rc: 1 Apr 29 22:02:43.593: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:02:44.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:44.611: INFO: rc: 1 Apr 29 22:02:44.611: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:45.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:45.573: INFO: rc: 1 Apr 29 22:02:45.573: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:46.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:46.867: INFO: rc: 1 Apr 29 22:02:46.867: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:47.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:47.577: INFO: rc: 1 Apr 29 22:02:47.577: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:02:48.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:48.580: INFO: rc: 1 Apr 29 22:02:48.580: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:49.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:49.777: INFO: rc: 1 Apr 29 22:02:49.777: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:50.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:50.580: INFO: rc: 1 Apr 29 22:02:50.580: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:51.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:51.605: INFO: rc: 1 Apr 29 22:02:51.605: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:02:52.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:52.585: INFO: rc: 1 Apr 29 22:02:52.585: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:53.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:53.574: INFO: rc: 1 Apr 29 22:02:53.574: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:54.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:54.600: INFO: rc: 1 Apr 29 22:02:54.600: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:55.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:56.345: INFO: rc: 1 Apr 29 22:02:56.345: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:02:56.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:56.663: INFO: rc: 1 Apr 29 22:02:56.663: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:57.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:57.620: INFO: rc: 1 Apr 29 22:02:57.621: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:58.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:58.598: INFO: rc: 1 Apr 29 22:02:58.598: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:59.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:02:59.570: INFO: rc: 1 Apr 29 22:02:59.570: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:03:00.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:00.584: INFO: rc: 1 Apr 29 22:03:00.584: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:01.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:01.584: INFO: rc: 1 Apr 29 22:03:01.584: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:02.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:02.586: INFO: rc: 1 Apr 29 22:03:02.586: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:03.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:03.570: INFO: rc: 1 Apr 29 22:03:03.570: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:03:04.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:04.643: INFO: rc: 1 Apr 29 22:03:04.643: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:05.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:06.244: INFO: rc: 1 Apr 29 22:03:06.244: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:06.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:06.965: INFO: rc: 1 Apr 29 22:03:06.965: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:07.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:07.583: INFO: rc: 1 Apr 29 22:03:07.583: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:03:08.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:08.651: INFO: rc: 1 Apr 29 22:03:08.651: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:09.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:10.199: INFO: rc: 1 Apr 29 22:03:10.199: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:10.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:10.671: INFO: rc: 1 Apr 29 22:03:10.671: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:11.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:11.626: INFO: rc: 1 Apr 29 22:03:11.626: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:03:12.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:12.566: INFO: rc: 1 Apr 29 22:03:12.566: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:13.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:13.764: INFO: rc: 1 Apr 29 22:03:13.764: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:14.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:14.573: INFO: rc: 1 Apr 29 22:03:14.573: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:15.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:15.594: INFO: rc: 1 Apr 29 22:03:15.594: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName+ nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:03:16.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:16.581: INFO: rc: 1 Apr 29 22:03:16.582: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:17.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:17.600: INFO: rc: 1 Apr 29 22:03:17.600: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:18.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:18.592: INFO: rc: 1 Apr 29 22:03:18.592: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:19.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:19.568: INFO: rc: 1 Apr 29 22:03:19.568: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:03:20.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:20.673: INFO: rc: 1 Apr 29 22:03:20.673: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:21.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:21.652: INFO: rc: 1 Apr 29 22:03:21.652: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:22.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:22.668: INFO: rc: 1 Apr 29 22:03:22.668: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:23.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:23.625: INFO: rc: 1 Apr 29 22:03:23.625: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:03:24.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:24.763: INFO: rc: 1 Apr 29 22:03:24.763: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:25.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:26.248: INFO: rc: 1 Apr 29 22:03:26.249: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:26.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:26.619: INFO: rc: 1 Apr 29 22:03:26.619: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:27.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:27.609: INFO: rc: 1 Apr 29 22:03:27.609: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:03:28.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:28.586: INFO: rc: 1 Apr 29 22:03:28.586: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:29.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:29.648: INFO: rc: 1 Apr 29 22:03:29.649: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:30.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:30.744: INFO: rc: 1 Apr 29 22:03:30.744: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:31.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:31.579: INFO: rc: 1 Apr 29 22:03:31.580: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:03:32.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:32.685: INFO: rc: 1 Apr 29 22:03:32.685: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:33.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:33.784: INFO: rc: 1 Apr 29 22:03:33.784: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:34.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:34.938: INFO: rc: 1 Apr 29 22:03:34.938: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30913 nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:03:35.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913' Apr 29 22:03:35.633: INFO: rc: 1 Apr 29 22:03:35.633: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30913 + echo hostName nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
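What the loop above shows is the e2e framework's poll-until-timeout reachability check: exec into the client pod, run nc against the node IP and NodePort, and retry about once per second until the connection succeeds or a 2m0s budget is exhausted. A minimal, self-contained Go sketch of that pattern follows; it reuses the exact command from the log, but the helper name probeOnce, the use of k8s.io/apimachinery's wait.PollImmediate, and the main wrapper are illustrative assumptions, not the suite's actual code in test/e2e/network/service.go.

package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// probeOnce re-runs the exact probe from the log: exec into the client pod
// and try the NodePort with nc. A non-nil error corresponds to the "rc: 1"
// entries above.
func probeOnce(ns, pod, host string, port int) error {
	cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
		"--namespace="+ns, "exec", pod, "--", "/bin/sh", "-x", "-c",
		fmt.Sprintf("echo hostName | nc -v -t -w 2 %s %d", host, port))
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	// Poll once per second for up to two minutes, matching the cadence and
	// the 2m0s budget visible in this log.
	err := wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		return probeOnce("services-9023", "execpod-affinity7bkr4",
			"10.10.190.207", 30913) == nil, nil
	})
	if err != nil {
		fmt.Println("service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30913")
	}
}

Note that "Connection refused" here means the node itself answered but nothing accepted on port 30913: either the service's NodePort rules were never programmed by kube-proxy, or the rules rejected the connection because no ready endpoints stood behind them.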
Apr 29 22:03:35.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913'
Apr 29 22:03:35.887: INFO: rc: 1
Apr 29 22:03:35.887: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpod-affinity7bkr4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30913:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30913
nc: connect to 10.10.190.207 port 30913 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 22:03:35.888: FAIL: Unexpected error:
    <*errors.errorString | 0xc004682b00>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30913 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30913 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc000e5e160, 0x77b33d8, 0xc0006e2b00, 0xc000d88f00, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2531
k8s.io/kubernetes/test/e2e/network.glob..func24.27()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1862 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000478900)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000478900)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000478900, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Apr 29 22:03:35.889: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-9023, will wait for the garbage collector to delete the pods
Apr 29 22:03:35.953: INFO: Deleting ReplicationController affinity-nodeport-transition took: 3.360971ms
Apr 29 22:03:36.053: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.312147ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-9023".
STEP: Found 27 events.
Apr 29 22:03:45.269: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-lwtf6: { } Scheduled: Successfully assigned services-9023/affinity-nodeport-transition-lwtf6 to node1
Apr 29 22:03:45.269: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-sw84z: { } Scheduled: Successfully assigned services-9023/affinity-nodeport-transition-sw84z to node1
Apr 29 22:03:45.269: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-xkmjd: { } Scheduled: Successfully assigned services-9023/affinity-nodeport-transition-xkmjd to node2
Apr 29 22:03:45.269: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinity7bkr4: { } Scheduled: Successfully assigned services-9023/execpod-affinity7bkr4 to node2
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:01:20 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-xkmjd
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:01:20 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-sw84z
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:01:20 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-lwtf6
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:01:22 +0000 UTC - event for affinity-nodeport-transition-lwtf6: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 298.948118ms
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:01:22 +0000 UTC - event for affinity-nodeport-transition-lwtf6: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:01:22 +0000 UTC - event for affinity-nodeport-transition-xkmjd: {kubelet node2} Created: Created container affinity-nodeport-transition
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:01:22 +0000 UTC - event for affinity-nodeport-transition-xkmjd: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 275.406399ms
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:01:22 +0000 UTC - event for affinity-nodeport-transition-xkmjd: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:01:23 +0000 UTC - event for affinity-nodeport-transition-lwtf6: {kubelet node1} Created: Created container affinity-nodeport-transition
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:01:23 +0000 UTC - event for affinity-nodeport-transition-sw84z: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:01:23 +0000 UTC - event for affinity-nodeport-transition-sw84z: {kubelet node1} Created: Created container affinity-nodeport-transition
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:01:23 +0000 UTC - event for affinity-nodeport-transition-sw84z: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 287.477464ms
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:01:23 +0000 UTC - event for affinity-nodeport-transition-xkmjd: {kubelet node2} Started: Started container affinity-nodeport-transition
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:01:24 +0000 UTC - event for affinity-nodeport-transition-lwtf6: {kubelet node1} Started: Started container affinity-nodeport-transition
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:01:24 +0000 UTC - event for affinity-nodeport-transition-sw84z: {kubelet node1} Started: Started container affinity-nodeport-transition
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:01:31 +0000 UTC - event for execpod-affinity7bkr4: {kubelet node2} Started: Started container agnhost-container
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:01:31 +0000 UTC - event for execpod-affinity7bkr4: {kubelet node2} Created: Created container agnhost-container
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:01:31 +0000 UTC - event for execpod-affinity7bkr4: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:01:31 +0000 UTC - event for execpod-affinity7bkr4: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 284.1812ms
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:03:35 +0000 UTC - event for affinity-nodeport-transition-lwtf6: {kubelet node1} Killing: Stopping container affinity-nodeport-transition
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:03:35 +0000 UTC - event for affinity-nodeport-transition-sw84z: {kubelet node1} Killing: Stopping container affinity-nodeport-transition
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:03:35 +0000 UTC - event for affinity-nodeport-transition-xkmjd: {kubelet node2} Killing: Stopping container affinity-nodeport-transition
Apr 29 22:03:45.269: INFO: At 2022-04-29 22:03:35 +0000 UTC - event for execpod-affinity7bkr4: {kubelet node2} Killing: Stopping container agnhost-container
Apr 29 22:03:45.271: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Apr 29 22:03:45.271: INFO: 
Apr 29 22:03:45.275: INFO: Logging node info for node master1
Apr 29 22:03:45.278: INFO: Node Info: &Node{ObjectMeta:{master1 c968c2e7-7594-4f6e-b85d-932008e8124f 42256 0 2022-04-29 19:57:18 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-29 19:57:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:05:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-04-29 20:08:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:15 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:44 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:44 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:44 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:03:44 +0000 UTC,LastTransitionTime:2022-04-29 20:00:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c3419fad4d2d4c5c9574e5b11ef92b4b,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:5e0f934f-c777-4827-ade6-efec15a825ef,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f09acec459e39fddbd00d2ff6975dd7715ddae0b47f70ed62d6f52e6be7e3f22 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a 
alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 29 22:03:45.278: INFO: Logging kubelet events for node master1
Apr 29 22:03:45.280: INFO: Logging pods the kubelet thinks is on node master1
Apr 29 22:03:45.302: INFO: coredns-8474476ff8-59qm6 started at 2022-04-29 20:00:39 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.302: INFO: Container coredns ready: true, restart count 1
Apr 29 22:03:45.302: INFO: container-registry-65d7c44b96-np5nk started at 2022-04-29 20:04:54 +0000 UTC (0+2 container statuses recorded)
Apr 29 22:03:45.302: INFO: Container docker-registry ready: true, restart count 0
Apr 29 22:03:45.302: INFO: Container nginx ready: true, restart count 0
Apr 29 22:03:45.302: INFO: node-exporter-svkqv started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded)
Apr 29 22:03:45.302: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 29 22:03:45.302: INFO: Container node-exporter ready: true, restart count 0
Apr 29 22:03:45.302: INFO: kube-apiserver-master1 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.302: INFO: Container kube-apiserver ready: true, restart count 0
Apr 29 22:03:45.302: INFO: kube-controller-manager-master1 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.302: INFO: Container kube-controller-manager ready: true, restart count 2
Apr 29 22:03:45.302: INFO: kube-scheduler-master1 started at 2022-04-29 20:16:35 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.302: INFO: Container kube-scheduler ready: true, restart count 1
Apr 29 22:03:45.302: INFO: kube-proxy-9s46x started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.302: INFO: Container kube-proxy ready: true, restart count 1
Apr 29 22:03:45.302: INFO: kube-flannel-cskzh started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded)
Apr 29 22:03:45.302: INFO: Init container install-cni ready: true, restart count 0
Apr 29 22:03:45.302: INFO: Container kube-flannel ready: true, restart count 1
Apr 29 22:03:45.302: INFO: kube-multus-ds-amd64-w54d6 started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.302: INFO: Container kube-multus ready: true, restart count 1
Apr 29 22:03:45.302: INFO: node-feature-discovery-controller-cff799f9f-zpv5m started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.302: INFO: Container nfd-controller ready: true, restart count 0
Apr 29 22:03:45.396: INFO: Latency metrics for node master1
Apr 29 22:03:45.396: INFO: Logging node info for node master2
Apr 29 22:03:45.398: INFO: Node Info: &Node{ObjectMeta:{master2 5b362581-f2d5-419c-a0b0-3aad7bec82f9 41984 0 2022-04-29 19:57:49 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw
flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-29 19:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:10:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:15 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:36 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:36 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:36 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:03:36 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d055250c7e194b8a9a572c232266a800,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:fb9f32a4-f021-45dd-bddf-6f1d5ae9abae,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 29 22:03:45.399: INFO: Logging kubelet events for node master2
Apr 29 22:03:45.402: INFO: Logging pods the kubelet thinks is on node master2
Apr 29 22:03:45.412: INFO: kube-scheduler-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.412: INFO: Container kube-scheduler ready: true, restart count 3
Apr 29 22:03:45.412: INFO: dns-autoscaler-7df78bfcfb-csfp5 started at 2022-04-29 20:00:43 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.412: INFO: Container autoscaler ready: true, restart count 1
Apr 29 22:03:45.412: INFO: coredns-8474476ff8-bg2wr started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.412: INFO: Container coredns ready: true, restart count 2
Apr 29 22:03:45.412: INFO: prometheus-operator-585ccfb458-q8r6q started at 2022-04-29 20:13:20 +0000 UTC (0+2 container statuses recorded)
Apr 29 22:03:45.412: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 29 22:03:45.412: INFO: Container prometheus-operator ready: true, restart count 0
Apr 29 22:03:45.412: INFO: node-exporter-9rgc2 started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded)
Apr 29 22:03:45.412: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 29 22:03:45.412: INFO: Container node-exporter ready: true, restart count 0
Apr 29 22:03:45.412: INFO: kube-controller-manager-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.412: INFO: Container kube-controller-manager ready: true, restart count 1
Apr 29 22:03:45.412: INFO: kube-proxy-4dnjw started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.412: INFO: Container kube-proxy ready: true, restart count 2
Apr 29 22:03:45.412: INFO: kube-flannel-q2wgv started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded)
Apr 29 22:03:45.412: INFO: Init container install-cni ready: true, restart count 0
Apr 29 22:03:45.412: INFO: Container kube-flannel ready: true, restart count 1
Apr 29 22:03:45.412: INFO: kube-multus-ds-amd64-txslv started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.412: INFO: Container kube-multus ready: true, restart count 1
Apr 29 22:03:45.412: INFO: kube-apiserver-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.412: INFO: Container kube-apiserver ready: true, restart count 0
Apr 29 22:03:45.508: INFO: Latency metrics for node master2
Apr 29 22:03:45.508: INFO: Logging node info for node master3
Apr 29 22:03:45.511: INFO: Node Info: &Node{ObjectMeta:{master3 1096e515-b559-4c90-b0f7-3398537b5f9e 41986 0 2022-04-29 19:58:00 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-29 19:58:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:10:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:16 +0000 UTC,LastTransitionTime:2022-04-29 20:03:16 +0000
UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:37 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:37 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:37 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:03:37 +0000 UTC,LastTransitionTime:2022-04-29 20:00:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8955b376e6314525a9e533e277f5f4fb,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:6ffefaf4-8a5c-4288-a6a9-78ef35aa67ef,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 29 22:03:45.511: INFO: Logging kubelet events for node master3
Apr 29 22:03:45.513: INFO: Logging pods the kubelet thinks is on node master3
Apr 29 22:03:45.521: INFO: kube-controller-manager-master3 started at 2022-04-29 20:06:45 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.521: INFO: Container kube-controller-manager ready: true, restart count 3
Apr 29 22:03:45.521: INFO: kube-scheduler-master3 started at 2022-04-29 20:06:45 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.521: INFO: Container kube-scheduler ready: true, restart count 2
Apr 29 22:03:45.521: INFO: kube-proxy-gs7qh started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.521: INFO: Container kube-proxy ready: true, restart count 2
Apr 29 22:03:45.521: INFO: kube-flannel-g8w9b started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded)
Apr 29 22:03:45.521: INFO: Init container install-cni ready: true, restart count 0
Apr 29 22:03:45.521: INFO: Container kube-flannel ready: true, restart count 2
Apr 29 22:03:45.521: INFO: kube-multus-ds-amd64-lxrlj started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.521: INFO: Container kube-multus ready: true, restart count 1
Apr 29 22:03:45.521: INFO: node-exporter-gdq6v started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded)
Apr 29 22:03:45.521: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 29 22:03:45.521: INFO: Container node-exporter ready: true, restart count 0
Apr 29 22:03:45.521: INFO: kube-apiserver-master3 started at 2022-04-29 19:58:29 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.521: INFO: Container kube-apiserver ready: true, restart count 0
Apr 29 22:03:45.593: INFO: Latency metrics for node master3
Apr 29 22:03:45.593: INFO: Logging node info for node node1
Apr 29 22:03:45.595: INFO: Node Info: &Node{ObjectMeta:{node1 6842a10e-614a-46f0-b405-bc18936b0017 42026 0 2022-04-29 19:59:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true
feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-29 20:08:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-29 20:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-29 20:11:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:02:57 +0000 UTC,LastTransitionTime:2022-04-29 20:02:57 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:39 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:39 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:39 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:03:39 +0000 UTC,LastTransitionTime:2022-04-29 20:00:14 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2a0958eb1b3044f2963c9e5f2e902173,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:fc6a2d14-7726-4aec-9428-6617632ddcbe,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:cfef1b50441378a7b326a606756a12e664a435cc215d910f7aa9415cfde56361 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:145b4fe543408db530a0d8880c681aaa0e3df9b949467d93bcecf42e8625a181 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f09acec459e39fddbd00d2ff6975dd7715ddae0b47f70ed62d6f52e6be7e3f22 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 29 22:03:45.596: INFO: Logging kubelet events for node node1
Apr 29 22:03:45.598: INFO: Logging pods the kubelet thinks is on node node1
Apr 29 22:03:45.613: INFO: kubernetes-dashboard-785dcbb76d-d2k5n started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.613: INFO: Container kubernetes-dashboard ready: true, restart count 1
Apr 29 22:03:45.613: INFO: node-exporter-c8777 started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded)
Apr 29 22:03:45.613: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 29 22:03:45.613: INFO: Container node-exporter ready: true, restart count 0
Apr 29 22:03:45.613: INFO: nodeport-test-fwjcj started at 2022-04-29 22:01:54 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.613: INFO: Container nodeport-test ready: true, restart count 0
Apr 29 22:03:45.613: INFO: kube-proxy-v9tgj started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.613: INFO: Container kube-proxy ready: true, restart count 2
Apr 29 22:03:45.613: INFO: kubernetes-metrics-scraper-5558854cb-g47c2 started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.613: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 29 22:03:45.613: INFO: collectd-ccgw2 started at 2022-04-29 20:17:24 +0000 UTC (0+3 container statuses recorded)
Apr 29 22:03:45.613: INFO: Container collectd ready: true, restart count 0
Apr 29 22:03:45.613: INFO: Container collectd-exporter ready: true, restart count 0
Apr 29 22:03:45.613: INFO: Container rbac-proxy ready: true, restart count 0
Apr 29 22:03:45.613: INFO: nodeport-test-5t786 started at 2022-04-29 22:01:54 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.613: INFO: Container nodeport-test ready: true, restart count 0
Apr 29 22:03:45.613: INFO: foo-znjp4 started at 2022-04-29 22:03:08 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.613: INFO: Container c ready: true, restart count 0
Apr 29 22:03:45.613: INFO: kube-flannel-47phs started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded)
Apr 29 22:03:45.613: INFO: Init container install-cni ready: true, restart count 2
Apr 29 22:03:45.613: INFO: Container kube-flannel ready: true, restart count 2
Apr 29 22:03:45.613: INFO: kube-multus-ds-amd64-kkz4q started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.613: INFO: Container kube-multus ready: true, restart count 1
Apr 29 22:03:45.613: INFO: cmk-f5znp started at 2022-04-29 20:12:25 +0000 UTC (0+2 container statuses recorded)
Apr 29 22:03:45.613: INFO: Container nodereport ready: true, restart count 0
Apr 29 22:03:45.613: INFO: Container reconcile ready: true, restart count 0
Apr 29 22:03:45.613: INFO: execpodrplcj started at 2022-04-29 22:02:00 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.613: INFO: Container agnhost-container ready: true, restart count 0
Apr 29 22:03:45.613: INFO: nginx-proxy-node1 started at 2022-04-29 19:59:05 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.613: INFO: Container nginx-proxy ready: true, restart count 2
Apr 29 22:03:45.613: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq started at 2022-04-29 20:09:17 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.613: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 29 22:03:45.613: INFO: cmk-init-discover-node1-gxlbt started at 2022-04-29 20:11:43 +0000 UTC (0+3 container statuses recorded)
Apr 29 22:03:45.613: INFO: Container discover ready: false, restart count 0
Apr 29 22:03:45.613: INFO: Container init ready: false, restart count 0
Apr 29 22:03:45.613: INFO: Container install ready: false, restart count 0
Apr 29 22:03:45.613: INFO: prometheus-k8s-0 started at 2022-04-29 20:13:38 +0000 UTC (0+4 container statuses recorded)
Apr 29 22:03:45.613: INFO: Container config-reloader ready: true, restart count 0
Apr 29 22:03:45.613: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 29 22:03:45.613: INFO: Container grafana ready: true, restart count 0
Apr 29 22:03:45.613: INFO: Container prometheus ready: true, restart count 1
Apr 29 22:03:45.613: INFO: tas-telemetry-aware-scheduling-84ff454dfb-khdw5 started at 2022-04-29 20:16:34 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.613: INFO: Container tas-extender ready: true, restart count 0
Apr 29 22:03:45.613: INFO: node-feature-discovery-worker-kbl9s started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:03:45.613: INFO: Container nfd-worker ready: true, restart count 0
Apr 29 22:03:45.878: INFO: Latency metrics for node node1
Apr 29 22:03:45.878: INFO: Logging node info for node node2
Apr 29 22:03:45.882: INFO: Node Info: &Node{ObjectMeta:{node2 2f399869-e81b-465d-97b4-806b6186d34a 42043 0 2022-04-29 19:59:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw
flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-29 20:08:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-29 20:12:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-29 20:12:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:12 +0000 UTC,LastTransitionTime:2022-04-29 20:03:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:40 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:40 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:03:40 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:03:40 +0000 UTC,LastTransitionTime:2022-04-29 20:03:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:22c763056cc24e6ba6e8bbadb5113d3d,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:8ca050bd-5d8a-4c59-8e02-41e26864aa92,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:cfef1b50441378a7b326a606756a12e664a435cc215d910f7aa9415cfde56361 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:145b4fe543408db530a0d8880c681aaa0e3df9b949467d93bcecf42e8625a181 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 29 22:03:45.884: INFO: Logging kubelet events for node node2 Apr 29 22:03:45.889: INFO: Logging pods the kubelet thinks is on node node2 Apr 29 22:03:45.910: INFO: nginx-proxy-node2 started at 2022-04-29 19:59:05 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:45.910: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 22:03:45.910: INFO: cmk-webhook-6c9d5f8578-b9mdv started at 2022-04-29 20:12:26 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:45.910: INFO: Container cmk-webhook ready: true, restart count 0 Apr 29 22:03:45.910: INFO: node-feature-discovery-worker-jtjjb started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:45.910: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 22:03:45.910: INFO: pod-subpath-test-secret-pxc6 started at 2022-04-29 22:03:42 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:45.910: INFO: Container test-container-subpath-secret-pxc6 ready: false, restart count 0 Apr 29 22:03:45.910: INFO: foo-5lnms started at 2022-04-29 22:03:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:45.910: INFO: Container c ready: true, restart count 0 Apr 29 22:03:45.910: INFO: kube-proxy-k6tv2 started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:45.910: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 
22:03:45.910: INFO: kube-flannel-dbcj8 started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded) Apr 29 22:03:45.910: INFO: Init container install-cni ready: true, restart count 2 Apr 29 22:03:45.910: INFO: Container kube-flannel ready: true, restart count 3 Apr 29 22:03:45.910: INFO: kube-multus-ds-amd64-7slcd started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:45.910: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:03:45.910: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5 started at 2022-04-29 20:09:17 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:45.910: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 22:03:45.910: INFO: cmk-74bh9 started at 2022-04-29 20:12:25 +0000 UTC (0+2 container statuses recorded) Apr 29 22:03:45.910: INFO: Container nodereport ready: true, restart count 0 Apr 29 22:03:45.910: INFO: Container reconcile ready: true, restart count 0 Apr 29 22:03:45.910: INFO: node-exporter-tlpmt started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded) Apr 29 22:03:45.910: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:03:45.910: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:03:45.910: INFO: pod-configmaps-6e44b6a5-f7e3-442e-8475-b35da3983dfa started at 2022-04-29 22:03:36 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:45.910: INFO: Container agnhost-container ready: true, restart count 0 Apr 29 22:03:45.910: INFO: cmk-init-discover-node2-csdn7 started at 2022-04-29 20:12:03 +0000 UTC (0+3 container statuses recorded) Apr 29 22:03:45.910: INFO: Container discover ready: false, restart count 0 Apr 29 22:03:45.910: INFO: Container init ready: false, restart count 0 Apr 29 22:03:45.910: INFO: Container install ready: false, restart count 0 Apr 29 22:03:45.910: INFO: collectd-zxs8j started at 2022-04-29 20:17:24 +0000 UTC (0+3 container statuses recorded) Apr 29 22:03:45.910: INFO: Container collectd ready: true, restart count 0 Apr 29 22:03:45.910: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 22:03:45.910: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 22:03:45.910: INFO: liveness-58ce6ea7-0047-4d50-ade0-469fbff47e37 started at 2022-04-29 22:03:28 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:45.910: INFO: Container agnhost-container ready: true, restart count 0 Apr 29 22:03:45.910: INFO: proxy-service-hrj5z-s8dnl started at 2022-04-29 22:03:36 +0000 UTC (0+1 container statuses recorded) Apr 29 22:03:45.910: INFO: Container proxy-service-hrj5z ready: true, restart count 0 Apr 29 22:03:45.910: INFO: pod-secrets-061018e6-9966-4a50-86d0-4c65150cbf65 started at 2022-04-29 22:03:41 +0000 UTC (0+3 container statuses recorded) Apr 29 22:03:45.910: INFO: Container creates-volume-test ready: false, restart count 0 Apr 29 22:03:45.910: INFO: Container dels-volume-test ready: false, restart count 0 Apr 29 22:03:45.910: INFO: Container upds-volume-test ready: false, restart count 0 Apr 29 22:03:46.340: INFO: Latency metrics for node node2 Apr 29 22:03:46.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9023" for this suite. 
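------------------------------
Aside: the per-node pod dumps above are what the e2e framework prints while diagnosing the failed service test. A minimal client-go sketch of the same query, assuming client-go v0.21.x and this run's kubeconfig path; the node name comes from the log and everything else is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// List every pod scheduled to node2 across all namespaces; this is the
	// information behind each "<pod> started at <time>" line in the dump.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=node2"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s started at %s\n", p.Namespace, p.Name, p.Status.StartTime)
	}
}
------------------------------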
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [145.861 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:03:35.888: Unexpected error: <*errors.errorString | 0xc004682b00>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30913 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30913 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":19,"skipped":519,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:28.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-58ce6ea7-0047-4d50-ade0-469fbff47e37 in namespace container-probe-3294 Apr 29 22:03:34.820: INFO: Started pod liveness-58ce6ea7-0047-4d50-ade0-469fbff47e37 in namespace container-probe-3294 STEP: checking the pod's current state and verifying that restartCount is present Apr 29 22:03:34.823: INFO: Initial restart count of pod liveness-58ce6ea7-0047-4d50-ade0-469fbff47e37 is 0 Apr 29 22:03:50.859: INFO: Restart count of pod container-probe-3294/liveness-58ce6ea7-0047-4d50-ade0-469fbff47e37 is now 1 (16.035677485s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:03:50.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3294" for this suite. 
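------------------------------
Aside: the container-probe test above passes because the kubelet restarts the container once its /healthz liveness probe starts failing (the restart count goes from 0 to 1 in about 16s). A sketch of the kind of pod such a test creates, assuming k8s.io/api v0.21 (where Probe still embeds Handler) and the agnhost image seen elsewhere in this log; the pod name, namespace, and timings are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// agnhost's "liveness" server answers /healthz OK at first, then starts
	// failing, so the kubelet kills and restarts the container; a watcher can
	// then observe status.containerStatuses[].restartCount increase.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Args:  []string{"liveness"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // renamed ProbeHandler in newer k8s.io/api
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------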
• [SLOW TEST:22.101 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":481,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:46.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in volume subpath Apr 29 22:03:46.422: INFO: Waiting up to 5m0s for pod "var-expansion-3867f1fa-f015-4f88-9c30-e928f78aa4ba" in namespace "var-expansion-9642" to be "Succeeded or Failed" Apr 29 22:03:46.424: INFO: Pod "var-expansion-3867f1fa-f015-4f88-9c30-e928f78aa4ba": Phase="Pending", Reason="", readiness=false. Elapsed: 1.881519ms Apr 29 22:03:48.427: INFO: Pod "var-expansion-3867f1fa-f015-4f88-9c30-e928f78aa4ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004595056s Apr 29 22:03:50.431: INFO: Pod "var-expansion-3867f1fa-f015-4f88-9c30-e928f78aa4ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008259836s Apr 29 22:03:52.434: INFO: Pod "var-expansion-3867f1fa-f015-4f88-9c30-e928f78aa4ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011145828s STEP: Saw pod success Apr 29 22:03:52.434: INFO: Pod "var-expansion-3867f1fa-f015-4f88-9c30-e928f78aa4ba" satisfied condition "Succeeded or Failed" Apr 29 22:03:52.436: INFO: Trying to get logs from node node2 pod var-expansion-3867f1fa-f015-4f88-9c30-e928f78aa4ba container dapi-container: STEP: delete the pod Apr 29 22:03:52.448: INFO: Waiting for pod var-expansion-3867f1fa-f015-4f88-9c30-e928f78aa4ba to disappear Apr 29 22:03:52.450: INFO: Pod var-expansion-3867f1fa-f015-4f88-9c30-e928f78aa4ba no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:03:52.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9642" for this suite. 
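------------------------------
Aside: the var-expansion test above exercises $(VAR) substitution in a volume subPathExpr: the kubelet expands the variable from the container's environment before mounting, so each pod gets its own subdirectory. A sketch of such a pod, again assuming client-go v0.21; the busybox command and all names are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "test -d /logs && echo ok"},
				Env: []corev1.EnvVar{{
					// POD_NAME is resolved from the downward API, then used below.
					Name: "POD_NAME",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:        "workdir",
					MountPath:   "/logs",
					SubPathExpr: "$(POD_NAME)", // expanded per pod by the kubelet
				}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------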
• [SLOW TEST:6.065 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":20,"skipped":543,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:46.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 STEP: creating the pod Apr 29 22:03:46.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5680 create -f -' Apr 29 22:03:46.585: INFO: stderr: "" Apr 29 22:03:46.585: INFO: stdout: "pod/pause created\n" Apr 29 22:03:46.585: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 29 22:03:46.585: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5680" to be "running and ready" Apr 29 22:03:46.588: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.002931ms Apr 29 22:03:48.593: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007840029s Apr 29 22:03:50.596: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010893951s Apr 29 22:03:52.600: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.014634044s Apr 29 22:03:52.600: INFO: Pod "pause" satisfied condition "running and ready" Apr 29 22:03:52.600: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: adding the label testing-label with value testing-label-value to a pod Apr 29 22:03:52.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5680 label pods pause testing-label=testing-label-value' Apr 29 22:03:52.766: INFO: stderr: "" Apr 29 22:03:52.766: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 29 22:03:52.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5680 get pod pause -L testing-label' Apr 29 22:03:52.929: INFO: stderr: "" Apr 29 22:03:52.929: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 29 22:03:52.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5680 label pods pause testing-label-' Apr 29 22:03:53.125: INFO: stderr: "" Apr 29 22:03:53.125: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 29 22:03:53.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5680 get pod pause -L testing-label' Apr 29 22:03:53.286: INFO: stderr: "" Apr 29 22:03:53.286: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: using delete to clean up resources Apr 29 22:03:53.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5680 delete --grace-period=0 --force -f -' Apr 29 22:03:53.410: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 29 22:03:53.410: INFO: stdout: "pod \"pause\" force deleted\n" Apr 29 22:03:53.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5680 get rc,svc -l name=pause --no-headers' Apr 29 22:03:53.610: INFO: stderr: "No resources found in kubectl-5680 namespace.\n" Apr 29 22:03:53.610: INFO: stdout: "" Apr 29 22:03:53.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5680 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 29 22:03:53.776: INFO: stderr: "" Apr 29 22:03:53.776: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:03:53.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5680" for this suite. 
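------------------------------
Aside: the kubectl label flow above (add the label, verify, remove it with the trailing dash) maps onto strategic-merge patches against the pod; a minimal sketch, assuming client-go v0.21 and reusing the pod and namespace names from the log:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods := cs.CoreV1().Pods("kubectl-5680")

	// Equivalent of `kubectl label pods pause testing-label=testing-label-value`.
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := pods.Patch(context.TODO(), "pause", types.StrategicMergePatchType, add, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Equivalent of `kubectl label pods pause testing-label-`:
	// a null value in a strategic-merge patch deletes the key.
	del := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	if _, err := pods.Patch(context.TODO(), "pause", types.StrategicMergePatchType, del, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
------------------------------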
• [SLOW TEST:7.602 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1306 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":14,"skipped":196,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:41.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-fec86b04-d0c8-4782-b685-06a52235183c STEP: Creating secret with name s-test-opt-upd-791086ef-717b-48ee-ab45-79e366e9da70 STEP: Creating the pod Apr 29 22:03:41.799: INFO: The status of Pod pod-secrets-061018e6-9966-4a50-86d0-4c65150cbf65 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:03:43.804: INFO: The status of Pod pod-secrets-061018e6-9966-4a50-86d0-4c65150cbf65 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:03:45.803: INFO: The status of Pod pod-secrets-061018e6-9966-4a50-86d0-4c65150cbf65 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:03:47.803: INFO: The status of Pod pod-secrets-061018e6-9966-4a50-86d0-4c65150cbf65 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:03:49.804: INFO: The status of Pod pod-secrets-061018e6-9966-4a50-86d0-4c65150cbf65 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-fec86b04-d0c8-4782-b685-06a52235183c STEP: Updating secret s-test-opt-upd-791086ef-717b-48ee-ab45-79e366e9da70 STEP: Creating secret with name s-test-opt-create-d4bdb783-25b2-4698-9c64-10a27781ffdd STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:03:53.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1273" for this suite. 
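------------------------------
Aside: the secrets test above relies on optional secret volumes: with optional set to true, the pod starts even while a referenced secret is absent, and the kubelet projects the data into the mounted volume once the secret appears or changes. A sketch of one such volume, assuming client-go v0.21; the secret name, image, and command are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	optional := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "creates-volume-test",
				Image: "busybox:1.28",
				// Poll the projected file; it appears once the secret is created.
				Command: []string{"sh", "-c", "while true; do cat /etc/secret-volumes/create/data-1 2>/dev/null; sleep 1; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-create",
					MountPath: "/etc/secret-volumes/create",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-create",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "s-test-opt-create", // illustrative; may not exist yet
						Optional:   &optional,           // pod still starts if the secret is absent
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------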
• [SLOW TEST:12.150 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":789,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:53.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create deployment with httpd image Apr 29 22:03:53.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1116 create -f -' Apr 29 22:03:54.304: INFO: stderr: "" Apr 29 22:03:54.304: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Apr 29 22:03:54.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1116 diff -f -' Apr 29 22:03:54.599: INFO: rc: 1 Apr 29 22:03:54.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1116 delete -f -' Apr 29 22:03:54.724: INFO: stderr: "" Apr 29 22:03:54.724: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:03:54.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1116" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":34,"skipped":805,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:50.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:03:55.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9145" for this suite. 
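------------------------------
Aside: in the kubectl diff test above, "rc: 1" is the expected outcome, not an error: kubectl diff exits 0 when live and declared state match, 1 when it finds differences, and greater than 1 on failure. A sketch of checking that from Go, with an illustrative manifest path:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// deployment.yaml is illustrative; the test pipes the manifest via stdin.
	cmd := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config",
		"diff", "-f", "deployment.yaml")
	err := cmd.Run()
	switch e := err.(type) {
	case nil:
		fmt.Println("no differences: live state matches the manifest")
	case *exec.ExitError:
		if e.ExitCode() == 1 {
			fmt.Println("differences found between live and declared state")
		} else {
			fmt.Printf("kubectl diff failed with rc %d\n", e.ExitCode())
		}
	default:
		fmt.Println("could not run kubectl:", err)
	}
}
------------------------------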
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":541,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:08.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1718, will wait for the garbage collector to delete the pods Apr 29 22:03:14.449: INFO: Deleting Job.batch foo took: 4.750099ms Apr 29 22:03:14.550: INFO: Terminating Job.batch foo pods took: 100.896375ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:03:55.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1718" for this suite. • [SLOW TEST:46.801 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":7,"skipped":59,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:35.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-hrj5z in namespace proxy-4403 I0429 22:03:36.022003 26 runners.go:190] Created replication controller with name: proxy-service-hrj5z, namespace: proxy-4403, replica count: 1 I0429 22:03:37.073863 26 runners.go:190] proxy-service-hrj5z Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 22:03:38.074683 26 runners.go:190] proxy-service-hrj5z Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 22:03:39.074929 26 runners.go:190] proxy-service-hrj5z Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 22:03:40.076229 26 runners.go:190] proxy-service-hrj5z Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady I0429 22:03:41.077219 26 runners.go:190] proxy-service-hrj5z Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 22:03:42.078642 26 runners.go:190] proxy-service-hrj5z Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 29 22:03:42.081: INFO: setup took 6.069917649s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 29 22:03:42.084: INFO: (0) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl/proxy/: test (200; 2.961611ms) Apr 29 22:03:42.084: INFO: (0) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 2.939172ms) Apr 29 22:03:42.084: INFO: (0) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 3.052768ms) Apr 29 22:03:42.084: INFO: (0) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:1080/proxy/: ... (200; 3.14737ms) Apr 29 22:03:42.085: INFO: (0) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:1080/proxy/: test<... (200; 3.875276ms) Apr 29 22:03:42.085: INFO: (0) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname2/proxy/: bar (200; 4.128327ms) Apr 29 22:03:42.085: INFO: (0) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname2/proxy/: bar (200; 4.367808ms) Apr 29 22:03:42.085: INFO: (0) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 4.067869ms) Apr 29 22:03:42.085: INFO: (0) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 4.091167ms) Apr 29 22:03:42.087: INFO: (0) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname1/proxy/: foo (200; 6.289485ms) Apr 29 22:03:42.087: INFO: (0) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname1/proxy/: foo (200; 6.378387ms) Apr 29 22:03:42.088: INFO: (0) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:460/proxy/: tls baz (200; 7.323728ms) Apr 29 22:03:42.088: INFO: (0) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname1/proxy/: tls baz (200; 7.36666ms) Apr 29 22:03:42.088: INFO: (0) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:443/proxy/: ... (200; 2.269678ms) Apr 29 22:03:42.091: INFO: (1) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:460/proxy/: tls baz (200; 2.474323ms) Apr 29 22:03:42.092: INFO: (1) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 2.767213ms) Apr 29 22:03:42.092: INFO: (1) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 2.68917ms) Apr 29 22:03:42.092: INFO: (1) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 2.731749ms) Apr 29 22:03:42.092: INFO: (1) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl/proxy/: test (200; 2.865351ms) Apr 29 22:03:42.092: INFO: (1) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:443/proxy/: test<... 
(200; 3.285074ms) Apr 29 22:03:42.092: INFO: (1) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname2/proxy/: tls qux (200; 3.555739ms) Apr 29 22:03:42.092: INFO: (1) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:462/proxy/: tls qux (200; 3.389734ms) Apr 29 22:03:42.092: INFO: (1) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname1/proxy/: foo (200; 3.491048ms) Apr 29 22:03:42.092: INFO: (1) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname2/proxy/: bar (200; 3.556291ms) Apr 29 22:03:42.093: INFO: (1) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname1/proxy/: foo (200; 4.136631ms) Apr 29 22:03:42.093: INFO: (1) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname2/proxy/: bar (200; 4.092688ms) Apr 29 22:03:42.096: INFO: (2) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:1080/proxy/: test<... (200; 2.358142ms) Apr 29 22:03:42.096: INFO: (2) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 2.323468ms) Apr 29 22:03:42.096: INFO: (2) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:1080/proxy/: ... (200; 2.808582ms) Apr 29 22:03:42.096: INFO: (2) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:460/proxy/: tls baz (200; 2.927923ms) Apr 29 22:03:42.096: INFO: (2) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname1/proxy/: foo (200; 2.913159ms) Apr 29 22:03:42.096: INFO: (2) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 2.736056ms) Apr 29 22:03:42.097: INFO: (2) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:462/proxy/: tls qux (200; 3.2552ms) Apr 29 22:03:42.097: INFO: (2) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 3.2497ms) Apr 29 22:03:42.097: INFO: (2) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:443/proxy/: test (200; 3.292993ms) Apr 29 22:03:42.097: INFO: (2) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 3.268849ms) Apr 29 22:03:42.097: INFO: (2) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname1/proxy/: foo (200; 3.596326ms) Apr 29 22:03:42.097: INFO: (2) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname2/proxy/: bar (200; 3.687828ms) Apr 29 22:03:42.097: INFO: (2) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname1/proxy/: tls baz (200; 3.735403ms) Apr 29 22:03:42.097: INFO: (2) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname2/proxy/: bar (200; 4.140707ms) Apr 29 22:03:42.098: INFO: (2) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname2/proxy/: tls qux (200; 4.368496ms) Apr 29 22:03:42.099: INFO: (3) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 1.862583ms) Apr 29 22:03:42.100: INFO: (3) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 2.801216ms) Apr 29 22:03:42.101: INFO: (3) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:462/proxy/: tls qux (200; 2.777997ms) Apr 29 22:03:42.101: INFO: (3) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 2.876458ms) Apr 29 22:03:42.101: INFO: (3) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:443/proxy/: ... 
(200; 3.098658ms) Apr 29 22:03:42.101: INFO: (3) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl/proxy/: test (200; 3.152402ms) Apr 29 22:03:42.101: INFO: (3) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 3.296637ms) Apr 29 22:03:42.101: INFO: (3) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:1080/proxy/: test<... (200; 3.372047ms) Apr 29 22:03:42.101: INFO: (3) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:460/proxy/: tls baz (200; 3.270353ms) Apr 29 22:03:42.101: INFO: (3) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname1/proxy/: foo (200; 3.539556ms) Apr 29 22:03:42.102: INFO: (3) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname1/proxy/: tls baz (200; 3.85023ms) Apr 29 22:03:42.102: INFO: (3) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname2/proxy/: tls qux (200; 3.80879ms) Apr 29 22:03:42.102: INFO: (3) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname2/proxy/: bar (200; 4.058139ms) Apr 29 22:03:42.102: INFO: (3) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname2/proxy/: bar (200; 4.188646ms) Apr 29 22:03:42.102: INFO: (3) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname1/proxy/: foo (200; 4.496231ms) Apr 29 22:03:42.104: INFO: (4) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 2.143955ms) Apr 29 22:03:42.104: INFO: (4) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:1080/proxy/: test<... (200; 2.174282ms) Apr 29 22:03:42.104: INFO: (4) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 2.186999ms) Apr 29 22:03:42.105: INFO: (4) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:443/proxy/: ... (200; 2.605274ms) Apr 29 22:03:42.105: INFO: (4) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:462/proxy/: tls qux (200; 2.921388ms) Apr 29 22:03:42.105: INFO: (4) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl/proxy/: test (200; 2.862735ms) Apr 29 22:03:42.105: INFO: (4) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 2.929467ms) Apr 29 22:03:42.105: INFO: (4) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname1/proxy/: tls baz (200; 3.10217ms) Apr 29 22:03:42.106: INFO: (4) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname2/proxy/: tls qux (200; 3.59226ms) Apr 29 22:03:42.106: INFO: (4) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname2/proxy/: bar (200; 3.70324ms) Apr 29 22:03:42.106: INFO: (4) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname1/proxy/: foo (200; 3.864913ms) Apr 29 22:03:42.107: INFO: (4) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname2/proxy/: bar (200; 4.094703ms) Apr 29 22:03:42.107: INFO: (4) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname1/proxy/: foo (200; 4.018116ms) Apr 29 22:03:42.109: INFO: (5) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:1080/proxy/: ... 
(200; 2.282582ms) Apr 29 22:03:42.109: INFO: (5) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:462/proxy/: tls qux (200; 2.219893ms) Apr 29 22:03:42.109: INFO: (5) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 2.288226ms) Apr 29 22:03:42.109: INFO: (5) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 2.520651ms) Apr 29 22:03:42.109: INFO: (5) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:460/proxy/: tls baz (200; 2.813003ms) Apr 29 22:03:42.109: INFO: (5) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 2.737401ms) Apr 29 22:03:42.109: INFO: (5) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 2.591485ms) Apr 29 22:03:42.110: INFO: (5) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:443/proxy/: test<... (200; 3.090254ms) Apr 29 22:03:42.110: INFO: (5) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl/proxy/: test (200; 3.065958ms) Apr 29 22:03:42.110: INFO: (5) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname2/proxy/: bar (200; 3.531506ms) Apr 29 22:03:42.110: INFO: (5) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname1/proxy/: foo (200; 3.477836ms) Apr 29 22:03:42.110: INFO: (5) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname2/proxy/: bar (200; 3.647381ms) Apr 29 22:03:42.111: INFO: (5) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname2/proxy/: tls qux (200; 3.973511ms) Apr 29 22:03:42.111: INFO: (5) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname1/proxy/: foo (200; 4.251525ms) Apr 29 22:03:42.111: INFO: (5) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname1/proxy/: tls baz (200; 4.232839ms) Apr 29 22:03:42.113: INFO: (6) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:460/proxy/: tls baz (200; 1.798478ms) Apr 29 22:03:42.114: INFO: (6) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 2.354305ms) Apr 29 22:03:42.114: INFO: (6) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:1080/proxy/: ... (200; 2.339409ms) Apr 29 22:03:42.114: INFO: (6) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:1080/proxy/: test<... (200; 2.38205ms) Apr 29 22:03:42.114: INFO: (6) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:462/proxy/: tls qux (200; 2.495446ms) Apr 29 22:03:42.114: INFO: (6) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl/proxy/: test (200; 2.78627ms) Apr 29 22:03:42.114: INFO: (6) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 2.800961ms) Apr 29 22:03:42.114: INFO: (6) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname2/proxy/: bar (200; 2.980982ms) Apr 29 22:03:42.114: INFO: (6) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:443/proxy/: ... (200; 2.289878ms) Apr 29 22:03:42.118: INFO: (7) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 2.54657ms) Apr 29 22:03:42.118: INFO: (7) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:1080/proxy/: test<... 
(200; 2.860179ms) Apr 29 22:03:42.119: INFO: (7) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname2/proxy/: tls qux (200; 3.104915ms) Apr 29 22:03:42.119: INFO: (7) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:460/proxy/: tls baz (200; 3.199992ms) Apr 29 22:03:42.119: INFO: (7) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl/proxy/: test (200; 3.227386ms) Apr 29 22:03:42.119: INFO: (7) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:443/proxy/: test (200; 2.499975ms) Apr 29 22:03:42.123: INFO: (8) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:1080/proxy/: test<... (200; 2.984729ms) Apr 29 22:03:42.123: INFO: (8) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname1/proxy/: foo (200; 3.0945ms) Apr 29 22:03:42.124: INFO: (8) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:1080/proxy/: ... (200; 3.378036ms) Apr 29 22:03:42.124: INFO: (8) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname1/proxy/: foo (200; 3.570584ms) Apr 29 22:03:42.124: INFO: (8) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname2/proxy/: bar (200; 3.455739ms) Apr 29 22:03:42.124: INFO: (8) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:460/proxy/: tls baz (200; 3.4687ms) Apr 29 22:03:42.124: INFO: (8) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:462/proxy/: tls qux (200; 3.790756ms) Apr 29 22:03:42.124: INFO: (8) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 3.696186ms) Apr 29 22:03:42.124: INFO: (8) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 3.757812ms) Apr 29 22:03:42.124: INFO: (8) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname1/proxy/: tls baz (200; 4.423774ms) Apr 29 22:03:42.124: INFO: (8) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname2/proxy/: tls qux (200; 4.212664ms) Apr 29 22:03:42.125: INFO: (8) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname2/proxy/: bar (200; 4.873358ms) Apr 29 22:03:42.127: INFO: (9) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:462/proxy/: tls qux (200; 1.800868ms) Apr 29 22:03:42.127: INFO: (9) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 2.185518ms) Apr 29 22:03:42.127: INFO: (9) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 2.172734ms) Apr 29 22:03:42.127: INFO: (9) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 2.429981ms) Apr 29 22:03:42.128: INFO: (9) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:1080/proxy/: ... (200; 2.424413ms) Apr 29 22:03:42.128: INFO: (9) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl/proxy/: test (200; 2.383827ms) Apr 29 22:03:42.128: INFO: (9) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:443/proxy/: test<... 
(200; 2.556346ms) Apr 29 22:03:42.129: INFO: (9) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:460/proxy/: tls baz (200; 3.378002ms) Apr 29 22:03:42.129: INFO: (9) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname2/proxy/: bar (200; 3.5618ms) Apr 29 22:03:42.129: INFO: (9) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 3.383928ms) Apr 29 22:03:42.129: INFO: (9) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname2/proxy/: tls qux (200; 3.526003ms) Apr 29 22:03:42.129: INFO: (9) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname1/proxy/: tls baz (200; 3.902645ms) Apr 29 22:03:42.129: INFO: (9) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname1/proxy/: foo (200; 4.031021ms) Apr 29 22:03:42.129: INFO: (9) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname2/proxy/: bar (200; 4.141219ms) Apr 29 22:03:42.129: INFO: (9) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname1/proxy/: foo (200; 4.02812ms) Apr 29 22:03:42.132: INFO: (10) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl/proxy/: test (200; 2.216527ms) Apr 29 22:03:42.132: INFO: (10) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:462/proxy/: tls qux (200; 2.350429ms) Apr 29 22:03:42.132: INFO: (10) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:460/proxy/: tls baz (200; 2.363684ms) Apr 29 22:03:42.132: INFO: (10) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:1080/proxy/: ... (200; 2.339082ms) Apr 29 22:03:42.132: INFO: (10) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:1080/proxy/: test<... (200; 2.847368ms) Apr 29 22:03:42.133: INFO: (10) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 3.092682ms) Apr 29 22:03:42.133: INFO: (10) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 3.027164ms) Apr 29 22:03:42.133: INFO: (10) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname2/proxy/: bar (200; 3.133746ms) Apr 29 22:03:42.133: INFO: (10) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 3.175107ms) Apr 29 22:03:42.133: INFO: (10) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 3.259147ms) Apr 29 22:03:42.133: INFO: (10) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname1/proxy/: foo (200; 3.557945ms) Apr 29 22:03:42.133: INFO: (10) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname1/proxy/: foo (200; 3.53563ms) Apr 29 22:03:42.133: INFO: (10) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:443/proxy/: test (200; 2.263956ms) Apr 29 22:03:42.136: INFO: (11) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 2.197647ms) Apr 29 22:03:42.136: INFO: (11) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:443/proxy/: test<... 
(200; 2.458562ms) Apr 29 22:03:42.137: INFO: (11) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 2.646593ms) Apr 29 22:03:42.137: INFO: (11) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:462/proxy/: tls qux (200; 2.792146ms) Apr 29 22:03:42.137: INFO: (11) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname1/proxy/: foo (200; 3.059215ms) Apr 29 22:03:42.137: INFO: (11) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 3.031429ms) Apr 29 22:03:42.137: INFO: (11) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname2/proxy/: bar (200; 3.261159ms) Apr 29 22:03:42.137: INFO: (11) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 3.1038ms) Apr 29 22:03:42.137: INFO: (11) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:1080/proxy/: ... (200; 3.130633ms) Apr 29 22:03:42.137: INFO: (11) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname1/proxy/: tls baz (200; 3.63255ms) Apr 29 22:03:42.138: INFO: (11) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname1/proxy/: foo (200; 3.892002ms) Apr 29 22:03:42.138: INFO: (11) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname2/proxy/: bar (200; 3.893163ms) Apr 29 22:03:42.138: INFO: (11) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname2/proxy/: tls qux (200; 4.339286ms) Apr 29 22:03:42.140: INFO: (12) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:460/proxy/: tls baz (200; 1.917053ms) Apr 29 22:03:42.141: INFO: (12) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 2.176688ms) Apr 29 22:03:42.141: INFO: (12) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:1080/proxy/: ... (200; 2.370952ms) Apr 29 22:03:42.141: INFO: (12) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:443/proxy/: test<... 
(200; 2.773508ms) Apr 29 22:03:42.141: INFO: (12) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:462/proxy/: tls qux (200; 2.869516ms) Apr 29 22:03:42.142: INFO: (12) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 3.045567ms) Apr 29 22:03:42.142: INFO: (12) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname2/proxy/: tls qux (200; 3.106108ms) Apr 29 22:03:42.142: INFO: (12) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 3.266709ms) Apr 29 22:03:42.142: INFO: (12) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 3.359435ms) Apr 29 22:03:42.142: INFO: (12) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl/proxy/: test (200; 3.2783ms) Apr 29 22:03:42.142: INFO: (12) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname1/proxy/: foo (200; 3.341678ms) Apr 29 22:03:42.142: INFO: (12) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname2/proxy/: bar (200; 3.681376ms) Apr 29 22:03:42.142: INFO: (12) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname2/proxy/: bar (200; 3.664181ms) Apr 29 22:03:42.143: INFO: (12) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname1/proxy/: tls baz (200; 4.0069ms) Apr 29 22:03:42.143: INFO: (12) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname1/proxy/: foo (200; 4.198881ms) Apr 29 22:03:42.146: INFO: (13) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:462/proxy/: tls qux (200; 2.724038ms) Apr 29 22:03:42.146: INFO: (13) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:443/proxy/: ... (200; 2.854523ms) Apr 29 22:03:42.146: INFO: (13) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 2.910676ms) Apr 29 22:03:42.146: INFO: (13) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:1080/proxy/: test<... 
(200; 2.964584ms) Apr 29 22:03:42.146: INFO: (13) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 3.039564ms) Apr 29 22:03:42.146: INFO: (13) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname1/proxy/: foo (200; 3.261972ms) Apr 29 22:03:42.146: INFO: (13) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 3.180811ms) Apr 29 22:03:42.146: INFO: (13) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl/proxy/: test (200; 3.281061ms) Apr 29 22:03:42.146: INFO: (13) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname2/proxy/: bar (200; 3.556257ms) Apr 29 22:03:42.146: INFO: (13) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname1/proxy/: tls baz (200; 3.534869ms) Apr 29 22:03:42.147: INFO: (13) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 3.700774ms) Apr 29 22:03:42.147: INFO: (13) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname2/proxy/: bar (200; 3.95014ms) Apr 29 22:03:42.147: INFO: (13) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname1/proxy/: foo (200; 3.914253ms) Apr 29 22:03:42.147: INFO: (13) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname2/proxy/: tls qux (200; 4.312077ms) Apr 29 22:03:42.150: INFO: (14) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 2.146084ms) Apr 29 22:03:42.150: INFO: (14) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:443/proxy/: test<... (200; 2.674099ms) Apr 29 22:03:42.150: INFO: (14) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl/proxy/: test (200; 2.698363ms) Apr 29 22:03:42.150: INFO: (14) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 2.772159ms) Apr 29 22:03:42.150: INFO: (14) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname2/proxy/: bar (200; 2.943369ms) Apr 29 22:03:42.151: INFO: (14) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:1080/proxy/: ... 
(200; 3.136017ms) Apr 29 22:03:42.151: INFO: (14) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:460/proxy/: tls baz (200; 3.191175ms) Apr 29 22:03:42.151: INFO: (14) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 3.228278ms) Apr 29 22:03:42.151: INFO: (14) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname1/proxy/: tls baz (200; 3.399783ms) Apr 29 22:03:42.151: INFO: (14) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:462/proxy/: tls qux (200; 3.211672ms) Apr 29 22:03:42.151: INFO: (14) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname2/proxy/: bar (200; 3.36562ms) Apr 29 22:03:42.151: INFO: (14) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname2/proxy/: tls qux (200; 3.724344ms) Apr 29 22:03:42.151: INFO: (14) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname1/proxy/: foo (200; 3.824061ms) Apr 29 22:03:42.152: INFO: (14) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname1/proxy/: foo (200; 4.035282ms) Apr 29 22:03:42.154: INFO: (15) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 2.218844ms) Apr 29 22:03:42.154: INFO: (15) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 2.553249ms) Apr 29 22:03:42.154: INFO: (15) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:1080/proxy/: ... (200; 2.423184ms) Apr 29 22:03:42.155: INFO: (15) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:462/proxy/: tls qux (200; 2.863778ms) Apr 29 22:03:42.155: INFO: (15) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 2.928645ms) Apr 29 22:03:42.155: INFO: (15) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 2.91193ms) Apr 29 22:03:42.155: INFO: (15) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:443/proxy/: test (200; 3.122921ms) Apr 29 22:03:42.155: INFO: (15) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:1080/proxy/: test<... (200; 3.217903ms) Apr 29 22:03:42.155: INFO: (15) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname1/proxy/: foo (200; 3.314873ms) Apr 29 22:03:42.155: INFO: (15) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname2/proxy/: bar (200; 3.430686ms) Apr 29 22:03:42.155: INFO: (15) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname2/proxy/: bar (200; 3.631061ms) Apr 29 22:03:42.155: INFO: (15) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname2/proxy/: tls qux (200; 3.634228ms) Apr 29 22:03:42.156: INFO: (15) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname1/proxy/: tls baz (200; 3.871267ms) Apr 29 22:03:42.156: INFO: (15) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname1/proxy/: foo (200; 4.327615ms) Apr 29 22:03:42.158: INFO: (16) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 2.126398ms) Apr 29 22:03:42.158: INFO: (16) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:1080/proxy/: ... (200; 2.140212ms) Apr 29 22:03:42.158: INFO: (16) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:1080/proxy/: test<... 
(200; 2.20655ms) Apr 29 22:03:42.159: INFO: (16) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl/proxy/: test (200; 2.57717ms) Apr 29 22:03:42.159: INFO: (16) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:462/proxy/: tls qux (200; 2.565588ms) Apr 29 22:03:42.159: INFO: (16) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:460/proxy/: tls baz (200; 2.614278ms) Apr 29 22:03:42.159: INFO: (16) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname1/proxy/: foo (200; 2.782959ms) Apr 29 22:03:42.159: INFO: (16) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 2.752157ms) Apr 29 22:03:42.159: INFO: (16) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:443/proxy/: test<... (200; 2.136993ms) Apr 29 22:03:42.163: INFO: (17) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl/proxy/: test (200; 2.106586ms) Apr 29 22:03:42.163: INFO: (17) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 2.732ms) Apr 29 22:03:42.163: INFO: (17) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:462/proxy/: tls qux (200; 2.74919ms) Apr 29 22:03:42.163: INFO: (17) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 2.792115ms) Apr 29 22:03:42.164: INFO: (17) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname2/proxy/: tls qux (200; 3.295267ms) Apr 29 22:03:42.164: INFO: (17) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 3.245773ms) Apr 29 22:03:42.164: INFO: (17) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:460/proxy/: tls baz (200; 3.203412ms) Apr 29 22:03:42.164: INFO: (17) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname2/proxy/: bar (200; 3.323127ms) Apr 29 22:03:42.164: INFO: (17) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:1080/proxy/: ... (200; 3.244953ms) Apr 29 22:03:42.164: INFO: (17) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:443/proxy/: test (200; 1.84752ms) Apr 29 22:03:42.168: INFO: (18) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 2.464882ms) Apr 29 22:03:42.168: INFO: (18) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:443/proxy/: test<... (200; 2.333033ms) Apr 29 22:03:42.168: INFO: (18) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:460/proxy/: tls baz (200; 2.577683ms) Apr 29 22:03:42.168: INFO: (18) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 2.6925ms) Apr 29 22:03:42.168: INFO: (18) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 2.642627ms) Apr 29 22:03:42.169: INFO: (18) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:462/proxy/: tls qux (200; 3.388072ms) Apr 29 22:03:42.169: INFO: (18) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname1/proxy/: foo (200; 3.448036ms) Apr 29 22:03:42.169: INFO: (18) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 3.393205ms) Apr 29 22:03:42.169: INFO: (18) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:1080/proxy/: ... 
(200; 3.354413ms) Apr 29 22:03:42.169: INFO: (18) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname1/proxy/: tls baz (200; 3.596787ms) Apr 29 22:03:42.169: INFO: (18) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname1/proxy/: foo (200; 3.669389ms) Apr 29 22:03:42.169: INFO: (18) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname2/proxy/: bar (200; 3.638061ms) Apr 29 22:03:42.169: INFO: (18) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname2/proxy/: tls qux (200; 3.875029ms) Apr 29 22:03:42.170: INFO: (18) /api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname2/proxy/: bar (200; 4.04502ms) Apr 29 22:03:42.172: INFO: (19) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 1.932732ms) Apr 29 22:03:42.172: INFO: (19) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl/proxy/: test (200; 2.170264ms) Apr 29 22:03:42.172: INFO: (19) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:162/proxy/: bar (200; 2.316246ms) Apr 29 22:03:42.172: INFO: (19) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 2.545431ms) Apr 29 22:03:42.173: INFO: (19) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:462/proxy/: tls qux (200; 2.80289ms) Apr 29 22:03:42.173: INFO: (19) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:1080/proxy/: test<... (200; 2.780213ms) Apr 29 22:03:42.173: INFO: (19) /api/v1/namespaces/proxy-4403/pods/http:proxy-service-hrj5z-s8dnl:1080/proxy/: ... (200; 2.774928ms) Apr 29 22:03:42.173: INFO: (19) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:460/proxy/: tls baz (200; 2.811973ms) Apr 29 22:03:42.173: INFO: (19) /api/v1/namespaces/proxy-4403/services/http:proxy-service-hrj5z:portname2/proxy/: bar (200; 3.113775ms) Apr 29 22:03:42.173: INFO: (19) /api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:160/proxy/: foo (200; 3.054829ms) Apr 29 22:03:42.173: INFO: (19) /api/v1/namespaces/proxy-4403/services/https:proxy-service-hrj5z:tlsportname1/proxy/: tls baz (200; 3.378732ms) Apr 29 22:03:42.173: INFO: (19) /api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:443/proxy/: ...
------------------------------
[BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota Apr 29 22:03:54.760: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 29 22:03:59.765: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the replicaset Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:03:59.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4859" for this suite.
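The request loop earlier in this section is the apiserver proxy conformance check: numbered iterations that each fan out over the same sixteen proxy URLs. Two URL families are exercised — /api/v1/namespaces/<ns>/pods/[<scheme>:]<pod>:<port>/proxy/ and /api/v1/namespaces/<ns>/services/[<scheme>:]<svc>:<portname>/proxy/ — and each entry records the body served by the backend (foo, bar, test, tls baz, tls qux; some bodies survive only as "..." in this capture) plus the round-trip latency. The same endpoints can be hit by hand through kubectl's raw client; a minimal sketch, assuming the namespace and pod from the log were still live:

    # Proxy to a pod port addressed by number (body logged above: "foo")
    kubectl get --raw "/api/v1/namespaces/proxy-4403/pods/proxy-service-hrj5z-s8dnl:160/proxy/"
    # Same pod with the https scheme prefix and a TLS port (body: "tls qux")
    kubectl get --raw "/api/v1/namespaces/proxy-4403/pods/https:proxy-service-hrj5z-s8dnl:462/proxy/"
    # Proxy to a service port addressed by name (body: "bar")
    kubectl get --raw "/api/v1/namespaces/proxy-4403/services/proxy-service-hrj5z:portname2/proxy/"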
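The scale-subresource test just above never edits the ReplicaSet spec directly: per its STEP lines it gets the scale subresource, updates it, patches it, and only then verifies that Spec.Replicas changed on the parent object. A hedged kubectl equivalent, using the replica set and namespace named in the log (both are torn down when the test ends, so this is illustrative only):

    # Read the Scale object served by the /scale subresource
    kubectl get --raw "/apis/apps/v1/namespaces/replicaset-4859/replicasets/test-rs/scale"
    # kubectl scale drives the same endpoint to change the replica count
    kubectl -n replicaset-4859 scale replicaset test-rs --replicas=2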
• [SLOW TEST:5.048 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":35,"skipped":806,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:53.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-da92355e-d5d3-44f8-a6e3-d64ee26e1963 STEP: Creating a pod to test consume secrets Apr 29 22:03:53.847: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-387a0580-e6bb-4e5f-b177-4990ed0fdef1" in namespace "projected-3311" to be "Succeeded or Failed" Apr 29 22:03:53.849: INFO: Pod "pod-projected-secrets-387a0580-e6bb-4e5f-b177-4990ed0fdef1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.429913ms Apr 29 22:03:55.852: INFO: Pod "pod-projected-secrets-387a0580-e6bb-4e5f-b177-4990ed0fdef1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005171883s Apr 29 22:03:57.855: INFO: Pod "pod-projected-secrets-387a0580-e6bb-4e5f-b177-4990ed0fdef1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008058456s Apr 29 22:03:59.861: INFO: Pod "pod-projected-secrets-387a0580-e6bb-4e5f-b177-4990ed0fdef1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014409022s STEP: Saw pod success Apr 29 22:03:59.861: INFO: Pod "pod-projected-secrets-387a0580-e6bb-4e5f-b177-4990ed0fdef1" satisfied condition "Succeeded or Failed" Apr 29 22:03:59.865: INFO: Trying to get logs from node node2 pod pod-projected-secrets-387a0580-e6bb-4e5f-b177-4990ed0fdef1 container projected-secret-volume-test: STEP: delete the pod Apr 29 22:03:59.878: INFO: Waiting for pod pod-projected-secrets-387a0580-e6bb-4e5f-b177-4990ed0fdef1 to disappear Apr 29 22:03:59.880: INFO: Pod pod-projected-secrets-387a0580-e6bb-4e5f-b177-4990ed0fdef1 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:03:59.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3311" for this suite. 
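"With mappings" in the projected-secret test above means the secret keys are remapped to explicit file paths through items entries, instead of being materialized under their own key names. A minimal sketch of that pod shape; the secret and container names are taken from the log, while the pod name, image, and key/path pair are illustrative assumptions:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo            # assumed name
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox                       # the suite uses its own test image
        command: ["cat", "/etc/projected/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/projected
          readOnly: true
      volumes:
      - name: secret-volume
        projected:
          sources:
          - secret:
              name: projected-secret-test-map-da92355e-d5d3-44f8-a6e3-d64ee26e1963
              items:
              - key: data-1                  # assumed key
                path: new-path-data-1        # assumed mapped path
    EOF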
• [SLOW TEST:6.076 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":208,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:55.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 29 22:03:55.098: INFO: Waiting up to 5m0s for pod "pod-70e80dbc-7c58-407d-b2cc-005283879a8d" in namespace "emptydir-7225" to be "Succeeded or Failed" Apr 29 22:03:55.101: INFO: Pod "pod-70e80dbc-7c58-407d-b2cc-005283879a8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.392326ms Apr 29 22:03:57.104: INFO: Pod "pod-70e80dbc-7c58-407d-b2cc-005283879a8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005385824s Apr 29 22:03:59.115: INFO: Pod "pod-70e80dbc-7c58-407d-b2cc-005283879a8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017194429s Apr 29 22:04:01.119: INFO: Pod "pod-70e80dbc-7c58-407d-b2cc-005283879a8d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020522167s Apr 29 22:04:03.124: INFO: Pod "pod-70e80dbc-7c58-407d-b2cc-005283879a8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.025649347s STEP: Saw pod success Apr 29 22:04:03.124: INFO: Pod "pod-70e80dbc-7c58-407d-b2cc-005283879a8d" satisfied condition "Succeeded or Failed" Apr 29 22:04:03.126: INFO: Trying to get logs from node node2 pod pod-70e80dbc-7c58-407d-b2cc-005283879a8d container test-container: STEP: delete the pod Apr 29 22:04:03.139: INFO: Waiting for pod pod-70e80dbc-7c58-407d-b2cc-005283879a8d to disappear Apr 29 22:04:03.141: INFO: Pod pod-70e80dbc-7c58-407d-b2cc-005283879a8d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:03.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7225" for this suite. 
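The (root,0644,default) triple in the emptyDir test name pins down the three variables under test: the file is written as root, with mode 0644, on the default medium (node-local disk rather than medium: Memory). A rough busybox stand-in for what the test container checks; the pod name, image, and commands are assumptions:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0644-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command:
        - sh
        - -c
        - echo content > /test-volume/test-file && chmod 0644 /test-volume/test-file && ls -ln /test-volume/test-file
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}                         # no medium set: the "default" in the test name
    EOF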
• [SLOW TEST:8.080 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":558,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:59.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 29 22:04:00.005: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 29 22:04:02.017: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866640, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866640, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866640, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866640, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 22:04:04.020: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866640, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866640, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866640, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866640, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 22:04:06.022: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866640, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866640, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866640, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866640, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 29 22:04:09.029: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:09.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2707" for this suite. STEP: Destroying namespace "webhook-2707-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.303 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":36,"skipped":816,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:59.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:03:59.940: INFO: The status of Pod busybox-scheduling-ac6e2d47-b7e6-4eef-a5af-b4aa525bb89e is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:04:01.945: INFO: The status of Pod busybox-scheduling-ac6e2d47-b7e6-4eef-a5af-b4aa525bb89e is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:04:03.944: INFO: The status of Pod busybox-scheduling-ac6e2d47-b7e6-4eef-a5af-b4aa525bb89e is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:04:05.943: INFO: The status of Pod busybox-scheduling-ac6e2d47-b7e6-4eef-a5af-b4aa525bb89e is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:04:07.944: INFO: The status of Pod busybox-scheduling-ac6e2d47-b7e6-4eef-a5af-b4aa525bb89e is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:04:09.945: INFO: The status of Pod busybox-scheduling-ac6e2d47-b7e6-4eef-a5af-b4aa525bb89e is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:09.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1444" for this suite. 
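The AdmissionWebhook case earlier in this section tests reconfiguration rather than admission itself: per its STEP lines it first updates the MutatingWebhookConfiguration so its rules no longer cover the create operation (the next configMap must come through unmutated), then patches create back in (the following configMap must be mutated). A hedged sketch of such a patch; the configuration name and the webhook/rule indices here are hypothetical:

    # JSON-patch the first rule of the first webhook to cover CREATE again
    kubectl patch mutatingwebhookconfiguration e2e-test-mutating-webhook --type=json \
      -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]'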
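The Kubelet case just above only asserts that stdout from a busybox command scheduled into a pod surfaces through the logs endpoint. The same round trip by hand, with an assumed pod name and message:

    kubectl run busybox-logs-demo --image=busybox --restart=Never --command -- sh -c 'echo Hello from busybox'
    kubectl logs -f pod/busybox-logs-demo    # once the container starts, follows to exit and prints the echoed line
    kubectl delete pod busybox-logs-demo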
• [SLOW TEST:10.060 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox command in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:41 should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":213,"failed":0} SSSSS ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":37,"failed":0} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:01:54.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service nodeport-test with type=NodePort in namespace services-1285 STEP: creating replication controller nodeport-test in namespace services-1285 I0429 22:01:54.180132 28 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-1285, replica count: 2 I0429 22:01:57.231000 28 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 22:02:00.232211 28 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 29 22:02:00.232: INFO: Creating new exec pod Apr 29 22:02:05.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Apr 29 22:02:05.499: INFO: stderr: "+ nc -v -t -w 2 nodeport-test 80\n+ echo hostName\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Apr 29 22:02:05.499: INFO: stdout: "" Apr 29 22:02:06.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Apr 29 22:02:06.769: INFO: stderr: "+ nc -v -t -w 2 nodeport-test 80\n+ echo hostName\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Apr 29 22:02:06.770: INFO: stdout: "" Apr 29 22:02:07.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Apr 29 22:02:07.773: INFO: stderr: "+ nc -v -t -w 2 nodeport-test 80\n+ echo hostName\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Apr 29 22:02:07.773: INFO: stdout: "" Apr 29 22:02:08.499: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Apr 29 22:02:08.748: INFO: stderr: "+ nc -v -t -w 2 nodeport-test 80\n+ echo hostName\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Apr 29 22:02:08.748: INFO: stdout: "nodeport-test-fwjcj" Apr 29 22:02:08.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.13.47 80' Apr 29 22:02:08.985: INFO: stderr: "+ nc -v -t -w 2 10.233.13.47 80\nConnection to 10.233.13.47 80 port [tcp/http] succeeded!\n+ echo hostName\n" Apr 29 22:02:08.985: INFO: stdout: "nodeport-test-5t786" Apr 29 22:02:08.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:09.240: INFO: rc: 1 Apr 29 22:02:09.240: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:10.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:10.472: INFO: rc: 1 Apr 29 22:02:10.472: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:11.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:11.496: INFO: rc: 1 Apr 29 22:02:11.496: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:02:12.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:12.477: INFO: rc: 1 Apr 29 22:02:12.477: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:13.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:13.492: INFO: rc: 1 Apr 29 22:02:13.492: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:14.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:14.482: INFO: rc: 1 Apr 29 22:02:14.482: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:15.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:15.494: INFO: rc: 1 Apr 29 22:02:15.494: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:16.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:16.489: INFO: rc: 1 Apr 29 22:02:16.489: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:02:17.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:17.476: INFO: rc: 1 Apr 29 22:02:17.476: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:18.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:18.492: INFO: rc: 1 Apr 29 22:02:18.493: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:19.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:19.488: INFO: rc: 1 Apr 29 22:02:19.488: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:20.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:20.956: INFO: rc: 1 Apr 29 22:02:20.956: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:21.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:21.626: INFO: rc: 1 Apr 29 22:02:21.626: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:02:22.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:22.559: INFO: rc: 1 Apr 29 22:02:22.559: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:23.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:23.527: INFO: rc: 1 Apr 29 22:02:23.527: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:24.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:24.467: INFO: rc: 1 Apr 29 22:02:24.467: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:25.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:25.580: INFO: rc: 1 Apr 29 22:02:25.580: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:26.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:26.481: INFO: rc: 1 Apr 29 22:02:26.481: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:02:27.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:27.545: INFO: rc: 1 Apr 29 22:02:27.545: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:28.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:28.489: INFO: rc: 1 Apr 29 22:02:28.489: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:29.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:29.503: INFO: rc: 1 Apr 29 22:02:29.503: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:30.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:30.474: INFO: rc: 1 Apr 29 22:02:30.474: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:02:31.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543' Apr 29 22:02:31.498: INFO: rc: 1 Apr 29 22:02:31.498: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31543 nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:02:32.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543'
Apr 29 22:02:32.467: INFO: rc: 1
Apr 29 22:02:32.467: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31543
nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
[... the same command is retried roughly once per second; every attempt from Apr 29 22:02:33.241 through Apr 29 22:04:09.243 fails with the identical "Connection refused" output ...]
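Each "Running ..." record above is one iteration of the reachability probe: the test execs into the execpodrplcj helper pod and asks nc to open a TCP connection to node IP 10.10.190.207 on NodePort 31543, with a 2-second connect timeout (-w 2). A single iteration can be reproduced outside the framework with a short Go sketch; the kubectl path, kubeconfig, namespace, pod name, and endpoint below are taken from this log, and the sketch is only an illustration, not the e2e framework's own helper:

package main

import (
	"fmt"
	"os/exec"
)

// probeNodePort runs the same in-pod check this log shows: exec into the
// helper pod and have nc attempt a TCP connection to the node IP and
// NodePort, with a 2-second connect timeout (-w 2).
func probeNodePort() error {
	cmd := exec.Command("/usr/local/bin/kubectl",
		"--kubeconfig=/root/.kube/config",
		"--namespace=services-1285",
		"exec", "execpodrplcj", "--",
		"/bin/sh", "-x", "-c",
		"echo hostName | nc -v -t -w 2 10.10.190.207 31543")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	return err // non-nil (exit status 1) while the port refuses connections
}

func main() {
	if err := probeNodePort(); err != nil {
		fmt.Println("probe failed:", err)
	}
}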
Apr 29 22:04:08.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543'
Apr 29 22:04:08.483: INFO: rc: 1
Apr 29 22:04:08.483: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31543
nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 22:04:09.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543'
Apr 29 22:04:09.495: INFO: rc: 1
Apr 29 22:04:09.495: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31543
nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 22:04:09.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543'
Apr 29 22:04:09.748: INFO: rc: 1
Apr 29 22:04:09.748: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 exec execpodrplcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31543:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31543
nc: connect to 10.10.190.207 port 31543 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 22:04:09.748: FAIL: Unexpected error:
    <*errors.errorString | 0xc00414cb40>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31543 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31543 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.11()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 +0x265
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00036d680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00036d680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00036d680, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
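The loop above is the framework's NodePort reachability check: from the execpodrplcj helper pod it pipes a string into netcat, which attempts a 2-second TCP connect to node IP 10.10.190.207 on NodePort 31543, and the test fails once the connect has been refused for the entire 2m0s budget. A minimal sketch for reproducing the probe by hand, assuming the services-1285 namespace, the execpodrplcj pod, and node1 at 10.10.190.207 from this run are still present (the namespace is torn down in the AfterEach below):

  # Re-run the exact probe the framework was retrying:
  kubectl --kubeconfig=/root/.kube/config --namespace=services-1285 \
    exec execpodrplcj -- /bin/sh -x -c \
    'echo hostName | nc -v -t -w 2 10.10.190.207 31543'

  # Hypothetical follow-up on node1 itself: "Connection refused" on a NodePort
  # usually means kube-proxy never programmed the port, so check its state.
  iptables-save | grep -w 31543   # NodePort rules for an iptables-mode kube-proxy
  ss -lnt | grep 31543            # listener, if the proxy mode holds the port open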
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-1285".
STEP: Found 17 events.
Apr 29 22:04:09.765: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodrplcj: { } Scheduled: Successfully assigned services-1285/execpodrplcj to node1
Apr 29 22:04:09.765: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-test-5t786: { } Scheduled: Successfully assigned services-1285/nodeport-test-5t786 to node1
Apr 29 22:04:09.765: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-test-fwjcj: { } Scheduled: Successfully assigned services-1285/nodeport-test-fwjcj to node1
Apr 29 22:04:09.765: INFO: At 2022-04-29 22:01:54 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-5t786
Apr 29 22:04:09.765: INFO: At 2022-04-29 22:01:54 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-fwjcj
Apr 29 22:04:09.765: INFO: At 2022-04-29 22:01:55 +0000 UTC - event for nodeport-test-5t786: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 22:04:09.765: INFO: At 2022-04-29 22:01:56 +0000 UTC - event for nodeport-test-5t786: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 334.247306ms
Apr 29 22:04:09.765: INFO: At 2022-04-29 22:01:56 +0000 UTC - event for nodeport-test-fwjcj: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 22:04:09.766: INFO: At 2022-04-29 22:01:56 +0000 UTC - event for nodeport-test-fwjcj: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 300.084252ms
Apr 29 22:04:09.766: INFO: At 2022-04-29 22:01:57 +0000 UTC - event for nodeport-test-5t786: {kubelet node1} Started: Started container nodeport-test
Apr 29 22:04:09.766: INFO: At 2022-04-29 22:01:57 +0000 UTC - event for nodeport-test-5t786: {kubelet node1} Created: Created container nodeport-test
Apr 29 22:04:09.766: INFO: At 2022-04-29 22:01:57 +0000 UTC - event for nodeport-test-fwjcj: {kubelet node1} Started: Started container nodeport-test
Apr 29 22:04:09.766: INFO: At 2022-04-29 22:01:57 +0000 UTC - event for nodeport-test-fwjcj: {kubelet node1} Created: Created container nodeport-test
Apr 29 22:04:09.766: INFO: At 2022-04-29 22:02:01 +0000 UTC - event for execpodrplcj: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 22:04:09.766: INFO: At 2022-04-29 22:02:02 +0000 UTC - event for execpodrplcj: {kubelet node1} Started: Started container agnhost-container
Apr 29 22:04:09.766: INFO: At 2022-04-29 22:02:02 +0000 UTC - event for execpodrplcj: {kubelet node1} Created: Created container agnhost-container
Apr 29 22:04:09.766: INFO: At 2022-04-29 22:02:02 +0000 UTC - event for execpodrplcj: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 339.256506ms
Apr 29 22:04:09.768: INFO: POD                  NODE   PHASE    GRACE  CONDITIONS
Apr 29 22:04:09.769: INFO: execpodrplcj         node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:02:00 +0000 UTC }]
Apr 29 22:04:09.769: INFO: nodeport-test-5t786  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:01:54 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:01:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-29
22:01:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:01:54 +0000 UTC }] Apr 29 22:04:09.769: INFO: nodeport-test-fwjcj node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:01:54 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:01:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:01:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:01:54 +0000 UTC }] Apr 29 22:04:09.769: INFO: Apr 29 22:04:09.774: INFO: Logging node info for node master1 Apr 29 22:04:09.776: INFO: Node Info: &Node{ObjectMeta:{master1 c968c2e7-7594-4f6e-b85d-932008e8124f 42978 0 2022-04-29 19:57:18 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-29 19:57:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:05:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-04-29 20:08:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: 
{{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:15 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:04:04 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:04:04 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:04:04 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:04:04 +0000 UTC,LastTransitionTime:2022-04-29 20:00:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c3419fad4d2d4c5c9574e5b11ef92b4b,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:5e0f934f-c777-4827-ade6-efec15a825ef,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 
k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f09acec459e39fddbd00d2ff6975dd7715ddae0b47f70ed62d6f52e6be7e3f22 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 29 22:04:09.777: INFO: Logging kubelet events for node master1 Apr 29 22:04:09.779: INFO: Logging pods the kubelet thinks is on node master1 Apr 29 22:04:09.802: INFO: coredns-8474476ff8-59qm6 started at 2022-04-29 20:00:39 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:09.802: INFO: Container coredns ready: true, restart count 1 Apr 29 22:04:09.802: INFO: container-registry-65d7c44b96-np5nk started at 2022-04-29 20:04:54 +0000 UTC (0+2 container statuses recorded) Apr 29 22:04:09.802: INFO: Container docker-registry ready: true, restart count 0 Apr 29 22:04:09.802: INFO: Container nginx ready: 
true, restart count 0 Apr 29 22:04:09.802: INFO: kube-scheduler-master1 started at 2022-04-29 20:16:35 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:09.802: INFO: Container kube-scheduler ready: true, restart count 1 Apr 29 22:04:09.802: INFO: kube-proxy-9s46x started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:09.802: INFO: Container kube-proxy ready: true, restart count 1 Apr 29 22:04:09.802: INFO: kube-flannel-cskzh started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded) Apr 29 22:04:09.802: INFO: Init container install-cni ready: true, restart count 0 Apr 29 22:04:09.802: INFO: Container kube-flannel ready: true, restart count 1 Apr 29 22:04:09.802: INFO: kube-multus-ds-amd64-w54d6 started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:09.802: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:04:09.802: INFO: node-feature-discovery-controller-cff799f9f-zpv5m started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:09.802: INFO: Container nfd-controller ready: true, restart count 0 Apr 29 22:04:09.802: INFO: node-exporter-svkqv started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded) Apr 29 22:04:09.802: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:04:09.802: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:04:09.802: INFO: kube-apiserver-master1 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:09.802: INFO: Container kube-apiserver ready: true, restart count 0 Apr 29 22:04:09.802: INFO: kube-controller-manager-master1 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:09.802: INFO: Container kube-controller-manager ready: true, restart count 2 Apr 29 22:04:09.898: INFO: Latency metrics for node master1 Apr 29 22:04:09.898: INFO: Logging node info for node master2 Apr 29 22:04:09.900: INFO: Node Info: &Node{ObjectMeta:{master2 5b362581-f2d5-419c-a0b0-3aad7bec82f9 43018 0 2022-04-29 19:57:49 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-29 19:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 
20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:10:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:15 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:04:07 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:04:07 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:04:07 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:04:07 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d055250c7e194b8a9a572c232266a800,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:fb9f32a4-f021-45dd-bddf-6f1d5ae9abae,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 
localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 29 22:04:09.901: INFO: Logging kubelet events for node master2 Apr 29 22:04:09.904: INFO: Logging pods the kubelet thinks is on node master2 Apr 29 22:04:09.913: INFO: kube-controller-manager-master2 started at 
2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:09.914: INFO: Container kube-controller-manager ready: true, restart count 1 Apr 29 22:04:09.914: INFO: kube-scheduler-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:09.914: INFO: Container kube-scheduler ready: true, restart count 3 Apr 29 22:04:09.914: INFO: dns-autoscaler-7df78bfcfb-csfp5 started at 2022-04-29 20:00:43 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:09.914: INFO: Container autoscaler ready: true, restart count 1 Apr 29 22:04:09.914: INFO: coredns-8474476ff8-bg2wr started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:09.914: INFO: Container coredns ready: true, restart count 2 Apr 29 22:04:09.914: INFO: prometheus-operator-585ccfb458-q8r6q started at 2022-04-29 20:13:20 +0000 UTC (0+2 container statuses recorded) Apr 29 22:04:09.914: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:04:09.914: INFO: Container prometheus-operator ready: true, restart count 0 Apr 29 22:04:09.914: INFO: node-exporter-9rgc2 started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded) Apr 29 22:04:09.914: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:04:09.914: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:04:09.914: INFO: kube-apiserver-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:09.914: INFO: Container kube-apiserver ready: true, restart count 0 Apr 29 22:04:09.914: INFO: kube-proxy-4dnjw started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:09.914: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:04:09.914: INFO: kube-flannel-q2wgv started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded) Apr 29 22:04:09.914: INFO: Init container install-cni ready: true, restart count 0 Apr 29 22:04:09.914: INFO: Container kube-flannel ready: true, restart count 1 Apr 29 22:04:09.914: INFO: kube-multus-ds-amd64-txslv started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:09.914: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:04:10.006: INFO: Latency metrics for node master2 Apr 29 22:04:10.006: INFO: Logging node info for node master3 Apr 29 22:04:10.008: INFO: Node Info: &Node{ObjectMeta:{master3 1096e515-b559-4c90-b0f7-3398537b5f9e 43021 0 2022-04-29 19:58:00 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-29 19:58:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:10:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:16 +0000 UTC,LastTransitionTime:2022-04-29 20:03:16 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:04:07 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:04:07 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:04:07 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:04:07 +0000 UTC,LastTransitionTime:2022-04-29 20:00:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8955b376e6314525a9e533e277f5f4fb,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:6ffefaf4-8a5c-4288-a6a9-78ef35aa67ef,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 29 22:04:10.009: INFO: Logging kubelet events for node master3 Apr 29 22:04:10.015: INFO: Logging pods the kubelet thinks is on node master3 Apr 29 22:04:10.030: INFO: kube-apiserver-master3 started at 2022-04-29 19:58:29 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.030: INFO: Container kube-apiserver ready: true, restart count 0 Apr 29 22:04:10.030: INFO: kube-controller-manager-master3 started at 2022-04-29 20:06:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.030: INFO: Container kube-controller-manager ready: true, restart count 3 Apr 29 22:04:10.030: INFO: kube-scheduler-master3 started at 2022-04-29 20:06:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.030: INFO: Container kube-scheduler ready: true, restart count 2 Apr 29 22:04:10.030: INFO: kube-proxy-gs7qh started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.030: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:04:10.030: INFO: kube-flannel-g8w9b started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded) Apr 29 22:04:10.030: INFO: Init container install-cni ready: true, restart count 0 Apr 29 22:04:10.030: INFO: Container kube-flannel ready: true, restart count 2 Apr 29 22:04:10.030: INFO: kube-multus-ds-amd64-lxrlj started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.030: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:04:10.030: INFO: node-exporter-gdq6v started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded) Apr 29 22:04:10.030: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:04:10.030: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:04:10.103: INFO: Latency metrics for node master3 Apr 29 22:04:10.103: INFO: Logging node info for node node1 Apr 29 22:04:10.106: INFO: Node Info: &Node{ObjectMeta:{node1 6842a10e-614a-46f0-b405-bc18936b0017 43088 0 2022-04-29 19:59:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-29 20:08:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-29 20:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-29 20:11:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:02:57 +0000 UTC,LastTransitionTime:2022-04-29 20:02:57 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:04:09 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:04:09 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:04:09 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:04:09 +0000 UTC,LastTransitionTime:2022-04-29 20:00:14 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2a0958eb1b3044f2963c9e5f2e902173,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:fc6a2d14-7726-4aec-9428-6617632ddcbe,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:cfef1b50441378a7b326a606756a12e664a435cc215d910f7aa9415cfde56361 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:145b4fe543408db530a0d8880c681aaa0e3df9b949467d93bcecf42e8625a181 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f09acec459e39fddbd00d2ff6975dd7715ddae0b47f70ed62d6f52e6be7e3f22 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 29 22:04:10.106: INFO: Logging kubelet events for node node1 Apr 29 22:04:10.108: INFO: Logging pods the kubelet thinks is on node node1 Apr 29 22:04:10.124: INFO: prometheus-k8s-0 started at 2022-04-29 20:13:38 +0000 UTC (0+4 container statuses recorded) Apr 29 22:04:10.124: INFO: Container config-reloader ready: true, restart count 0 Apr 29 22:04:10.124: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Apr 29 22:04:10.124: INFO: Container grafana ready: true, restart count 0 Apr 29 22:04:10.124: INFO: Container prometheus ready: true, restart count 1 Apr 29 22:04:10.124: INFO: tas-telemetry-aware-scheduling-84ff454dfb-khdw5 started at 2022-04-29 20:16:34 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.124: INFO: Container tas-extender ready: true, restart count 0 Apr 29 22:04:10.124: INFO: node-feature-discovery-worker-kbl9s started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.124: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 22:04:10.124: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq started at 2022-04-29 20:09:17 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.124: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 22:04:10.124: INFO: cmk-init-discover-node1-gxlbt started at 2022-04-29 20:11:43 +0000 UTC (0+3 container statuses recorded) Apr 29 22:04:10.124: INFO: Container discover ready: false, restart count 0 Apr 29 22:04:10.124: INFO: Container init ready: false, restart count 0 Apr 29 22:04:10.124: INFO: Container install ready: false, restart count 0 Apr 29 22:04:10.124: INFO: nodeport-test-fwjcj started at 2022-04-29 22:01:54 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.124: INFO: Container nodeport-test ready: true, restart count 0 Apr 29 22:04:10.124: INFO: kube-proxy-v9tgj started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.124: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:04:10.124: INFO: kubernetes-dashboard-785dcbb76d-d2k5n started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.124: INFO: Container kubernetes-dashboard ready: true, restart count 1 Apr 29 22:04:10.124: INFO: node-exporter-c8777 started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded) Apr 29 22:04:10.124: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:04:10.124: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:04:10.124: INFO: nodeport-test-5t786 started at 2022-04-29 22:01:54 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.124: INFO: Container nodeport-test ready: true, restart count 0 Apr 29 22:04:10.124: INFO: fail-once-local-psphr started at 2022-04-29 22:03:55 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.124: INFO: Container c ready: false, restart count 1 Apr 29 22:04:10.124: INFO: termination-message-container1e70118f-048a-41f1-a1f0-9da04775a1da started at 2022-04-29 22:04:10 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.124: INFO: Container termination-message-container ready: false, restart count 0 Apr 29 22:04:10.124: INFO: kube-flannel-47phs started at 2022-04-29 20:00:03 +0000 UTC (1+1 container 
statuses recorded) Apr 29 22:04:10.124: INFO: Init container install-cni ready: true, restart count 2 Apr 29 22:04:10.124: INFO: Container kube-flannel ready: true, restart count 2 Apr 29 22:04:10.124: INFO: kubernetes-metrics-scraper-5558854cb-g47c2 started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.125: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Apr 29 22:04:10.125: INFO: collectd-ccgw2 started at 2022-04-29 20:17:24 +0000 UTC (0+3 container statuses recorded) Apr 29 22:04:10.125: INFO: Container collectd ready: true, restart count 0 Apr 29 22:04:10.125: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 22:04:10.125: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 22:04:10.125: INFO: fail-once-local-cw9zx started at 2022-04-29 22:03:55 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.125: INFO: Container c ready: false, restart count 1 Apr 29 22:04:10.125: INFO: execpodrplcj started at 2022-04-29 22:02:00 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.125: INFO: Container agnhost-container ready: true, restart count 0 Apr 29 22:04:10.125: INFO: nginx-proxy-node1 started at 2022-04-29 19:59:05 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.125: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 22:04:10.125: INFO: kube-multus-ds-amd64-kkz4q started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.125: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:04:10.125: INFO: cmk-f5znp started at 2022-04-29 20:12:25 +0000 UTC (0+2 container statuses recorded) Apr 29 22:04:10.125: INFO: Container nodereport ready: true, restart count 0 Apr 29 22:04:10.125: INFO: Container reconcile ready: true, restart count 0 Apr 29 22:04:10.317: INFO: Latency metrics for node node1 Apr 29 22:04:10.317: INFO: Logging node info for node node2 Apr 29 22:04:10.320: INFO: Node Info: &Node{ObjectMeta:{node2 2f399869-e81b-465d-97b4-806b6186d34a 42894 0 2022-04-29 19:59:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true 
feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-29 20:08:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-29 20:12:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-29 20:12:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:12 +0000 UTC,LastTransitionTime:2022-04-29 20:03:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:04:01 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:04:01 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:04:01 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:04:01 +0000 UTC,LastTransitionTime:2022-04-29 20:03:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:22c763056cc24e6ba6e8bbadb5113d3d,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:8ca050bd-5d8a-4c59-8e02-41e26864aa92,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:cfef1b50441378a7b326a606756a12e664a435cc215d910f7aa9415cfde56361 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:145b4fe543408db530a0d8880c681aaa0e3df9b949467d93bcecf42e8625a181 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 29 22:04:10.326: INFO: Logging kubelet events for node node2 Apr 29 22:04:10.329: INFO: Logging pods the kubelet thinks is on node node2 Apr 29 22:04:10.341: INFO: dns-test-83d0d947-3940-42e6-993a-4b253d7f632a started at (0+0 container statuses recorded) Apr 29 22:04:10.341: INFO: node-feature-discovery-worker-jtjjb started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.341: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 22:04:10.341: INFO: pod-subpath-test-secret-pxc6 started at 2022-04-29 22:03:42 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.341: INFO: Container test-container-subpath-secret-pxc6 ready: true, restart count 0 Apr 29 22:04:10.341: INFO: fail-once-local-xjnbw started at 2022-04-29 22:04:00 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.341: INFO: Container c ready: false, restart count 0 Apr 29 22:04:10.341: INFO: cmk-74bh9 started at 2022-04-29 20:12:25 +0000 UTC (0+2 container statuses recorded) Apr 29 22:04:10.341: INFO: Container nodereport ready: true, restart count 0 Apr 29 22:04:10.341: INFO: Container reconcile ready: true, restart count 0 Apr 29 22:04:10.341: INFO: node-exporter-tlpmt started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded) Apr 29 22:04:10.341: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:04:10.341: 
INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:04:10.341: INFO: pod-configmaps-6e44b6a5-f7e3-442e-8475-b35da3983dfa started at 2022-04-29 22:03:36 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.341: INFO: Container agnhost-container ready: true, restart count 0 Apr 29 22:04:10.341: INFO: busybox-scheduling-ac6e2d47-b7e6-4eef-a5af-b4aa525bb89e started at 2022-04-29 22:03:59 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.341: INFO: Container busybox-scheduling-ac6e2d47-b7e6-4eef-a5af-b4aa525bb89e ready: true, restart count 0 Apr 29 22:04:10.341: INFO: kube-proxy-k6tv2 started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.341: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:04:10.341: INFO: kube-flannel-dbcj8 started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded) Apr 29 22:04:10.341: INFO: Init container install-cni ready: true, restart count 2 Apr 29 22:04:10.341: INFO: Container kube-flannel ready: true, restart count 3 Apr 29 22:04:10.341: INFO: kube-multus-ds-amd64-7slcd started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.341: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:04:10.341: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5 started at 2022-04-29 20:09:17 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.341: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 22:04:10.341: INFO: fail-once-local-j4s8m started at 2022-04-29 22:04:00 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.341: INFO: Container c ready: false, restart count 0 Apr 29 22:04:10.341: INFO: cmk-init-discover-node2-csdn7 started at 2022-04-29 20:12:03 +0000 UTC (0+3 container statuses recorded) Apr 29 22:04:10.341: INFO: Container discover ready: false, restart count 0 Apr 29 22:04:10.341: INFO: Container init ready: false, restart count 0 Apr 29 22:04:10.341: INFO: Container install ready: false, restart count 0 Apr 29 22:04:10.341: INFO: collectd-zxs8j started at 2022-04-29 20:17:24 +0000 UTC (0+3 container statuses recorded) Apr 29 22:04:10.341: INFO: Container collectd ready: true, restart count 0 Apr 29 22:04:10.341: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 22:04:10.341: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 22:04:10.341: INFO: kube-proxy-mode-detector started at 2022-04-29 22:04:03 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.341: INFO: Container agnhost-container ready: true, restart count 0 Apr 29 22:04:10.341: INFO: nginx-proxy-node2 started at 2022-04-29 19:59:05 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.341: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 22:04:10.342: INFO: cmk-webhook-6c9d5f8578-b9mdv started at 2022-04-29 20:12:26 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.342: INFO: Container cmk-webhook ready: true, restart count 0 Apr 29 22:04:10.342: INFO: update-demo-nautilus-g4km9 started at 2022-04-29 22:03:52 +0000 UTC (0+1 container statuses recorded) Apr 29 22:04:10.342: INFO: Container update-demo ready: true, restart count 0 Apr 29 22:04:10.997: INFO: Latency metrics for node node2 Apr 29 22:04:10.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1285" for this suite. 
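[Editorial note] The failure verdict recorded just below reports that the NodePort service never became reachable at 10.10.190.207:31543 within the 2m0s window, even though both nodeport-test pods and the execpodrplcj prober were Running in the node listings above. A minimal manual reproduction of the same reachability check, assuming kubectl access while the namespace still exists (it is destroyed here, so the names are illustrative), might be:

$ kubectl -n services-1285 get svc nodeport-test -o wide       # confirm the allocated nodePort (31543)
$ kubectl -n services-1285 get endpoints nodeport-test         # confirm ready endpoints behind the service
$ kubectl -n services-1285 exec execpodrplcj -- \
    curl --connect-timeout 5 http://10.10.190.207:31543/       # probe the node IP on the nodePort, as the test's exec pod does

If the endpoints are populated but the probe still times out, the usual suspects are kube-proxy's rule programming on the target node or host-level filtering of the nodePort range.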
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [136.855 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to create a functioning NodePort service [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:04:09.748: Unexpected error: <*errors.errorString | 0xc00414cb40>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31543 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31543 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":3,"skipped":37,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:42.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-secret-pxc6 STEP: Creating a pod to test atomic-volume-subpath Apr 29 22:03:42.144: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-pxc6" in namespace "subpath-6888" to be "Succeeded or Failed" Apr 29 22:03:42.146: INFO: Pod "pod-subpath-test-secret-pxc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080589ms Apr 29 22:03:44.149: INFO: Pod "pod-subpath-test-secret-pxc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005376664s Apr 29 22:03:46.153: INFO: Pod "pod-subpath-test-secret-pxc6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008949158s Apr 29 22:03:48.156: INFO: Pod "pod-subpath-test-secret-pxc6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012462144s Apr 29 22:03:50.163: INFO: Pod "pod-subpath-test-secret-pxc6": Phase="Running", Reason="", readiness=true. Elapsed: 8.019643052s Apr 29 22:03:52.167: INFO: Pod "pod-subpath-test-secret-pxc6": Phase="Running", Reason="", readiness=true. Elapsed: 10.023634643s Apr 29 22:03:54.172: INFO: Pod "pod-subpath-test-secret-pxc6": Phase="Running", Reason="", readiness=true. Elapsed: 12.028206617s Apr 29 22:03:56.177: INFO: Pod "pod-subpath-test-secret-pxc6": Phase="Running", Reason="", readiness=true. Elapsed: 14.032746612s Apr 29 22:03:58.181: INFO: Pod "pod-subpath-test-secret-pxc6": Phase="Running", Reason="", readiness=true. Elapsed: 16.037459946s Apr 29 22:04:00.186: INFO: Pod "pod-subpath-test-secret-pxc6": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.042709002s Apr 29 22:04:02.190: INFO: Pod "pod-subpath-test-secret-pxc6": Phase="Running", Reason="", readiness=true. Elapsed: 20.046104471s Apr 29 22:04:04.193: INFO: Pod "pod-subpath-test-secret-pxc6": Phase="Running", Reason="", readiness=true. Elapsed: 22.049516357s Apr 29 22:04:06.198: INFO: Pod "pod-subpath-test-secret-pxc6": Phase="Running", Reason="", readiness=true. Elapsed: 24.053929568s Apr 29 22:04:08.203: INFO: Pod "pod-subpath-test-secret-pxc6": Phase="Running", Reason="", readiness=true. Elapsed: 26.058973791s Apr 29 22:04:10.208: INFO: Pod "pod-subpath-test-secret-pxc6": Phase="Running", Reason="", readiness=true. Elapsed: 28.064207921s Apr 29 22:04:12.213: INFO: Pod "pod-subpath-test-secret-pxc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.069049604s STEP: Saw pod success Apr 29 22:04:12.213: INFO: Pod "pod-subpath-test-secret-pxc6" satisfied condition "Succeeded or Failed" Apr 29 22:04:12.215: INFO: Trying to get logs from node node2 pod pod-subpath-test-secret-pxc6 container test-container-subpath-secret-pxc6: STEP: delete the pod Apr 29 22:04:12.239: INFO: Waiting for pod pod-subpath-test-secret-pxc6 to disappear Apr 29 22:04:12.241: INFO: Pod pod-subpath-test-secret-pxc6 no longer exists STEP: Deleting pod pod-subpath-test-secret-pxc6 Apr 29 22:04:12.241: INFO: Deleting pod "pod-subpath-test-secret-pxc6" in namespace "subpath-6888" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:12.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6888" for this suite. • [SLOW TEST:30.145 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":32,"skipped":528,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:09.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 29 22:04:15.032: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:15.058: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2314" for this suite. • [SLOW TEST:5.093 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":218,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:55.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:19.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6495" for this suite. 
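[Editorial note] The fail-once-local pods visible in the node listings above (each with restart count 1) belong to this Job: its containers deliberately fail on their first run and succeed after the kubelet restarts them in place, which is what restartPolicy: OnFailure permits. A hedged sketch of an equivalent Job, assuming busybox and an emptyDir marker file rather than the suite's internal helper image:

$ kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: fail-once-local
spec:
  completions: 4
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure          # restart the failed container in the same pod
      volumes:
      - name: marker
        emptyDir: {}                    # survives container restarts within the pod
      containers:
      - name: c
        image: busybox:1.28
        volumeMounts:
        - name: marker
          mountPath: /marker
        # Fail the first attempt, succeed once the marker file exists
        command: ["sh", "-c", "test -f /marker/done || { touch /marker/done; exit 1; }; exit 0"]
EOF
$ kubectl wait --for=condition=complete job/fail-once-local --timeout=2m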
• [SLOW TEST:24.047 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":8,"skipped":107,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:19.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:19.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5366" for this suite. 
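[Editorial note] The steps above exercise the events.k8s.io API group rather than core v1 events. Rough command-line equivalents, assuming an event named test-event (the namespace is destroyed here, and the field-selector names below are assumptions whose support varies by server version):

$ kubectl -n events-5366 get events.events.k8s.io
$ kubectl -n events-5366 get events.events.k8s.io --field-selector reportingController=test-controller
$ kubectl -n events-5366 patch events.events.k8s.io test-event --type=merge -p '{"note":"patched"}'
$ kubectl -n events-5366 delete events.events.k8s.io test-event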
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":9,"skipped":125,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:11.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Apr 29 22:04:11.069: INFO: The status of Pod pod-update-activedeadlineseconds-8cf3086c-e8f0-4419-9176-28cb070dddcc is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:04:13.072: INFO: The status of Pod pod-update-activedeadlineseconds-8cf3086c-e8f0-4419-9176-28cb070dddcc is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:04:15.073: INFO: The status of Pod pod-update-activedeadlineseconds-8cf3086c-e8f0-4419-9176-28cb070dddcc is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:04:17.073: INFO: The status of Pod pod-update-activedeadlineseconds-8cf3086c-e8f0-4419-9176-28cb070dddcc is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:04:19.075: INFO: The status of Pod pod-update-activedeadlineseconds-8cf3086c-e8f0-4419-9176-28cb070dddcc is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 29 22:04:19.590: INFO: Successfully updated pod "pod-update-activedeadlineseconds-8cf3086c-e8f0-4419-9176-28cb070dddcc" Apr 29 22:04:19.590: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-8cf3086c-e8f0-4419-9176-28cb070dddcc" in namespace "pods-8234" to be "terminated due to deadline exceeded" Apr 29 22:04:19.592: INFO: Pod "pod-update-activedeadlineseconds-8cf3086c-e8f0-4419-9176-28cb070dddcc": Phase="Running", Reason="", readiness=true. Elapsed: 1.844061ms Apr 29 22:04:21.594: INFO: Pod "pod-update-activedeadlineseconds-8cf3086c-e8f0-4419-9176-28cb070dddcc": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.00475055s Apr 29 22:04:21.595: INFO: Pod "pod-update-activedeadlineseconds-8cf3086c-e8f0-4419-9176-28cb070dddcc" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:21.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8234" for this suite. 
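[Editorial note] activeDeadlineSeconds is one of the few pod spec fields that may be mutated on a running pod (it can only be set or shortened), and once the deadline elapses the kubelet fails the pod with reason DeadlineExceeded, exactly as logged above. A minimal sketch of the same update via kubectl, using a shortened hypothetical pod name:

$ kubectl -n pods-8234 patch pod pod-update-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
# roughly 5s later the pod should report Failed/DeadlineExceeded:
$ kubectl -n pods-8234 get pod pod-update-demo -o jsonpath='{.status.phase}/{.status.reason}'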
• [SLOW TEST:10.571 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":49,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:15.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Apr 29 22:04:15.138: INFO: The status of Pod annotationupdatef584ff81-14fe-4ea4-9f9a-98fd20748058 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:04:17.142: INFO: The status of Pod annotationupdatef584ff81-14fe-4ea4-9f9a-98fd20748058 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:04:19.143: INFO: The status of Pod annotationupdatef584ff81-14fe-4ea4-9f9a-98fd20748058 is Running (Ready = true) Apr 29 22:04:19.664: INFO: Successfully updated pod "annotationupdatef584ff81-14fe-4ea4-9f9a-98fd20748058" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:21.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1658" for this suite. 
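[Editorial note] The pod in this test mounts its own metadata through a projected downwardAPI volume; when an annotation changes, the kubelet rewrites the mounted file, which is what the test waits for after "Successfully updated pod". A hedged sketch of such a pod, assuming busybox in place of the suite's image:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    build: "one"
spec:
  containers:
  - name: client
    image: busybox:1.28
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
$ kubectl annotate pod annotationupdate-demo build=two --overwrite   # the kubelet refreshes the mounted file shortly after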
• [SLOW TEST:6.587 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":239,"failed":0} S ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:21.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events Apr 29 22:04:21.645: INFO: created test-event-1 Apr 29 22:04:21.648: INFO: created test-event-2 Apr 29 22:04:21.651: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Apr 29 22:04:21.653: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Apr 29 22:04:21.684: INFO: requesting list of events to confirm quantity [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:21.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8438" for this suite. 
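[Editorial note] DeleteCollection removes every object matching a selector in a single API call, which is why the test needs only one request to remove all three labeled events. The kubectl equivalent, with a hypothetical label:

$ kubectl -n events-8438 delete events -l testevent-set=true
$ kubectl -n events-8438 get events -l testevent-set=true     # expect: No resources found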
•S ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":5,"skipped":58,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:21.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pod templates Apr 29 22:04:21.809: INFO: created test-podtemplate-1 Apr 29 22:04:21.811: INFO: created test-podtemplate-2 Apr 29 22:04:21.814: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Apr 29 22:04:21.816: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Apr 29 22:04:21.824: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:21.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-2720" for this suite. 
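[Editorial note] PodTemplate is a seldom-used standalone resource in the core v1 group; this test creates three labeled templates and removes them with one DeleteCollection call. A hedged sketch of one such object and its collection delete (the label is hypothetical):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PodTemplate
metadata:
  name: test-podtemplate-1
  labels:
    podtemplate-set: "true"
template:
  metadata:
    labels:
      app: demo
  spec:
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.4.1
EOF
$ kubectl delete podtemplates -l podtemplate-set=true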
• ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":6,"skipped":117,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:56.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-799.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-799.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-799.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-799.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 29 22:04:08.251: INFO: DNS probes using dns-test-a504ba7c-643d-4d3d-98f0-ab4023621618 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-799.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-799.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-799.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-799.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 29 22:04:16.292: INFO: DNS probes using dns-test-83d0d947-3940-42e6-993a-4b253d7f632a succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-799.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-799.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-799.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-799.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 29 22:04:22.338: INFO: DNS probes using dns-test-8a655ee8-3abc-4a1a-a8f4-7bb09782b479 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:22.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-799" for this suite. 
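[Editorial note] An ExternalName service publishes a CNAME record instead of allocating a cluster IP, which is why the probe pods above query `dig ... CNAME`; once the test flips the service to type=ClusterIP, the same DNS name answers with an A record instead. A hedged sketch of the starting object (the initial external name is an assumption; the log only shows the later change to bar.example.com):

$ kubectl -n dns-799 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
# from any pod with dig installed:
$ dig +short dns-test-service-3.dns-799.svc.cluster.local CNAME
foo.example.com.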
• [SLOW TEST:26.162 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":34,"skipped":504,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:52.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Apr 29 22:03:52.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 create -f -' Apr 29 22:03:52.846: INFO: stderr: "" Apr 29 22:03:52.846: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 29 22:03:52.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 29 22:03:52.987: INFO: stderr: "" Apr 29 22:03:52.987: INFO: stdout: "update-demo-nautilus-g4km9 update-demo-nautilus-qbf9b " Apr 29 22:03:52.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods update-demo-nautilus-g4km9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 29 22:03:53.146: INFO: stderr: "" Apr 29 22:03:53.146: INFO: stdout: "" Apr 29 22:03:53.146: INFO: update-demo-nautilus-g4km9 is created but not running Apr 29 22:03:58.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 29 22:03:58.319: INFO: stderr: "" Apr 29 22:03:58.319: INFO: stdout: "update-demo-nautilus-g4km9 update-demo-nautilus-qbf9b " Apr 29 22:03:58.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods update-demo-nautilus-g4km9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 29 22:03:58.468: INFO: stderr: "" Apr 29 22:03:58.468: INFO: stdout: "true" Apr 29 22:03:58.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods update-demo-nautilus-g4km9 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 29 22:03:58.628: INFO: stderr: "" Apr 29 22:03:58.628: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Apr 29 22:03:58.628: INFO: validating pod update-demo-nautilus-g4km9 Apr 29 22:03:58.632: INFO: got data: { "image": "nautilus.jpg" } Apr 29 22:03:58.632: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 29 22:03:58.632: INFO: update-demo-nautilus-g4km9 is verified up and running Apr 29 22:03:58.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods update-demo-nautilus-qbf9b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 29 22:03:58.790: INFO: stderr: "" Apr 29 22:03:58.790: INFO: stdout: "" Apr 29 22:03:58.790: INFO: update-demo-nautilus-qbf9b is created but not running Apr 29 22:04:03.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 29 22:04:03.991: INFO: stderr: "" Apr 29 22:04:03.991: INFO: stdout: "update-demo-nautilus-g4km9 update-demo-nautilus-qbf9b " Apr 29 22:04:03.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods update-demo-nautilus-g4km9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 29 22:04:04.164: INFO: stderr: "" Apr 29 22:04:04.164: INFO: stdout: "true" Apr 29 22:04:04.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods update-demo-nautilus-g4km9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 29 22:04:04.334: INFO: stderr: "" Apr 29 22:04:04.334: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Apr 29 22:04:04.334: INFO: validating pod update-demo-nautilus-g4km9 Apr 29 22:04:04.337: INFO: got data: { "image": "nautilus.jpg" } Apr 29 22:04:04.337: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 29 22:04:04.337: INFO: update-demo-nautilus-g4km9 is verified up and running Apr 29 22:04:04.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods update-demo-nautilus-qbf9b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 29 22:04:04.503: INFO: stderr: "" Apr 29 22:04:04.503: INFO: stdout: "true" Apr 29 22:04:04.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods update-demo-nautilus-qbf9b -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 29 22:04:04.660: INFO: stderr: "" Apr 29 22:04:04.660: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Apr 29 22:04:04.660: INFO: validating pod update-demo-nautilus-qbf9b Apr 29 22:04:04.664: INFO: got data: { "image": "nautilus.jpg" } Apr 29 22:04:04.664: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 29 22:04:04.664: INFO: update-demo-nautilus-qbf9b is verified up and running STEP: scaling down the replication controller Apr 29 22:04:04.673: INFO: scanned /root for discovery docs: Apr 29 22:04:04.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Apr 29 22:04:04.894: INFO: stderr: "" Apr 29 22:04:04.894: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 29 22:04:04.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 29 22:04:05.074: INFO: stderr: "" Apr 29 22:04:05.074: INFO: stdout: "update-demo-nautilus-g4km9 update-demo-nautilus-qbf9b " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 29 22:04:10.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 29 22:04:10.263: INFO: stderr: "" Apr 29 22:04:10.263: INFO: stdout: "update-demo-nautilus-g4km9 " Apr 29 22:04:10.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods update-demo-nautilus-g4km9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 29 22:04:10.435: INFO: stderr: "" Apr 29 22:04:10.435: INFO: stdout: "true" Apr 29 22:04:10.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods update-demo-nautilus-g4km9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 29 22:04:10.599: INFO: stderr: "" Apr 29 22:04:10.599: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Apr 29 22:04:10.599: INFO: validating pod update-demo-nautilus-g4km9 Apr 29 22:04:10.601: INFO: got data: { "image": "nautilus.jpg" } Apr 29 22:04:10.601: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 29 22:04:10.601: INFO: update-demo-nautilus-g4km9 is verified up and running STEP: scaling up the replication controller Apr 29 22:04:10.610: INFO: scanned /root for discovery docs: Apr 29 22:04:10.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Apr 29 22:04:10.824: INFO: stderr: "" Apr 29 22:04:10.824: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 29 22:04:10.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 29 22:04:10.983: INFO: stderr: "" Apr 29 22:04:10.983: INFO: stdout: "update-demo-nautilus-5fjts update-demo-nautilus-g4km9 " Apr 29 22:04:10.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods update-demo-nautilus-5fjts -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 29 22:04:11.162: INFO: stderr: "" Apr 29 22:04:11.162: INFO: stdout: "" Apr 29 22:04:11.162: INFO: update-demo-nautilus-5fjts is created but not running Apr 29 22:04:16.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 29 22:04:16.330: INFO: stderr: "" Apr 29 22:04:16.330: INFO: stdout: "update-demo-nautilus-5fjts update-demo-nautilus-g4km9 " Apr 29 22:04:16.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods update-demo-nautilus-5fjts -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 29 22:04:16.493: INFO: stderr: "" Apr 29 22:04:16.493: INFO: stdout: "" Apr 29 22:04:16.493: INFO: update-demo-nautilus-5fjts is created but not running Apr 29 22:04:21.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 29 22:04:21.670: INFO: stderr: "" Apr 29 22:04:21.670: INFO: stdout: "update-demo-nautilus-5fjts update-demo-nautilus-g4km9 " Apr 29 22:04:21.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods update-demo-nautilus-5fjts -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 29 22:04:21.836: INFO: stderr: "" Apr 29 22:04:21.836: INFO: stdout: "true" Apr 29 22:04:21.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods update-demo-nautilus-5fjts -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 29 22:04:22.004: INFO: stderr: "" Apr 29 22:04:22.004: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Apr 29 22:04:22.004: INFO: validating pod update-demo-nautilus-5fjts Apr 29 22:04:22.007: INFO: got data: { "image": "nautilus.jpg" } Apr 29 22:04:22.007: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 29 22:04:22.007: INFO: update-demo-nautilus-5fjts is verified up and running Apr 29 22:04:22.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods update-demo-nautilus-g4km9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Apr 29 22:04:22.175: INFO: stderr: "" Apr 29 22:04:22.175: INFO: stdout: "true" Apr 29 22:04:22.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods update-demo-nautilus-g4km9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 29 22:04:22.346: INFO: stderr: "" Apr 29 22:04:22.346: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Apr 29 22:04:22.346: INFO: validating pod update-demo-nautilus-g4km9 Apr 29 22:04:22.349: INFO: got data: { "image": "nautilus.jpg" } Apr 29 22:04:22.349: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 29 22:04:22.349: INFO: update-demo-nautilus-g4km9 is verified up and running STEP: using delete to clean up resources Apr 29 22:04:22.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 delete --grace-period=0 --force -f -' Apr 29 22:04:22.476: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 29 22:04:22.476: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 29 22:04:22.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get rc,svc -l name=update-demo --no-headers' Apr 29 22:04:22.673: INFO: stderr: "No resources found in kubectl-972 namespace.\n" Apr 29 22:04:22.673: INFO: stdout: "" Apr 29 22:04:22.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-972 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 29 22:04:22.844: INFO: stderr: "" Apr 29 22:04:22.844: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:22.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-972" for this suite. • [SLOW TEST:30.381 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:12.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:23.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8327" for this suite. • [SLOW TEST:11.063 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":33,"skipped":531,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:19.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 29 22:04:19.962: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 29 22:04:21.972: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866659, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866659, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866659, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866659, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 29 22:04:24.984: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Apr 29 22:04:24.998: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:25.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2185" for this suite. STEP: Destroying namespace "webhook-2185-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.633 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":10,"skipped":130,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:25.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 29 22:04:30.121: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:30.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4712" for this suite. 
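The expected "OK" above is the container's termination message: the container writes it to its terminationMessagePath (the default is /dev/termination-log), and the kubelet copies it into the pod's container status when the container terminates. With TerminationMessagePolicy set to FallbackToLogsOnError, the kubelet falls back to the tail of the container log only when that file is empty and the container exited with an error; here the pod succeeds, so the file's contents are used directly, which is exactly what this test asserts. A minimal sketch for reading the message back (the pod name is a placeholder, not taken from this run, and the namespace is destroyed at the end of the test, so this is illustrative only):

kubectl --kubeconfig=/root/.kube/config -n container-runtime-4712 get pod <pod-name> -o go-template='{{range .status.containerStatuses}}{{.state.terminated.message}}{{end}}'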
• [SLOW TEST:5.067 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":146,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:30.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-d1b5e044-2197-4e1a-ac99-e317854cc0a4 STEP: Creating a pod to test consume secrets Apr 29 22:04:30.187: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a83c72d6-a360-4de2-b9d3-fd7b51f09c0e" in namespace "projected-8960" to be "Succeeded or Failed" Apr 29 22:04:30.189: INFO: Pod "pod-projected-secrets-a83c72d6-a360-4de2-b9d3-fd7b51f09c0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2955ms Apr 29 22:04:32.192: INFO: Pod "pod-projected-secrets-a83c72d6-a360-4de2-b9d3-fd7b51f09c0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005250241s Apr 29 22:04:34.198: INFO: Pod "pod-projected-secrets-a83c72d6-a360-4de2-b9d3-fd7b51f09c0e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010682201s Apr 29 22:04:36.201: INFO: Pod "pod-projected-secrets-a83c72d6-a360-4de2-b9d3-fd7b51f09c0e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014393376s Apr 29 22:04:38.206: INFO: Pod "pod-projected-secrets-a83c72d6-a360-4de2-b9d3-fd7b51f09c0e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.01916609s STEP: Saw pod success Apr 29 22:04:38.206: INFO: Pod "pod-projected-secrets-a83c72d6-a360-4de2-b9d3-fd7b51f09c0e" satisfied condition "Succeeded or Failed" Apr 29 22:04:38.209: INFO: Trying to get logs from node node2 pod pod-projected-secrets-a83c72d6-a360-4de2-b9d3-fd7b51f09c0e container projected-secret-volume-test: STEP: delete the pod Apr 29 22:04:38.222: INFO: Waiting for pod pod-projected-secrets-a83c72d6-a360-4de2-b9d3-fd7b51f09c0e to disappear Apr 29 22:04:38.224: INFO: Pod pod-projected-secrets-a83c72d6-a360-4de2-b9d3-fd7b51f09c0e no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:38.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8960" for this suite. • [SLOW TEST:8.079 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":153,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:23.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 29 22:04:23.361: INFO: Waiting up to 5m0s for pod "pod-10b9c32e-6ed2-4d1b-811a-c8c986e50a66" in namespace "emptydir-8525" to be "Succeeded or Failed" Apr 29 22:04:23.363: INFO: Pod "pod-10b9c32e-6ed2-4d1b-811a-c8c986e50a66": Phase="Pending", Reason="", readiness=false. Elapsed: 1.865487ms Apr 29 22:04:25.367: INFO: Pod "pod-10b9c32e-6ed2-4d1b-811a-c8c986e50a66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005622055s Apr 29 22:04:27.370: INFO: Pod "pod-10b9c32e-6ed2-4d1b-811a-c8c986e50a66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009121394s Apr 29 22:04:29.376: INFO: Pod "pod-10b9c32e-6ed2-4d1b-811a-c8c986e50a66": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014323767s Apr 29 22:04:31.380: INFO: Pod "pod-10b9c32e-6ed2-4d1b-811a-c8c986e50a66": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018214042s Apr 29 22:04:33.384: INFO: Pod "pod-10b9c32e-6ed2-4d1b-811a-c8c986e50a66": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022639216s Apr 29 22:04:35.389: INFO: Pod "pod-10b9c32e-6ed2-4d1b-811a-c8c986e50a66": Phase="Pending", Reason="", readiness=false. Elapsed: 12.027968405s Apr 29 22:04:37.393: INFO: Pod "pod-10b9c32e-6ed2-4d1b-811a-c8c986e50a66": Phase="Pending", Reason="", readiness=false. Elapsed: 14.031840408s Apr 29 22:04:39.398: INFO: Pod "pod-10b9c32e-6ed2-4d1b-811a-c8c986e50a66": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.036349608s STEP: Saw pod success Apr 29 22:04:39.398: INFO: Pod "pod-10b9c32e-6ed2-4d1b-811a-c8c986e50a66" satisfied condition "Succeeded or Failed" Apr 29 22:04:39.400: INFO: Trying to get logs from node node2 pod pod-10b9c32e-6ed2-4d1b-811a-c8c986e50a66 container test-container: STEP: delete the pod Apr 29 22:04:39.414: INFO: Waiting for pod pod-10b9c32e-6ed2-4d1b-811a-c8c986e50a66 to disappear Apr 29 22:04:39.416: INFO: Pod pod-10b9c32e-6ed2-4d1b-811a-c8c986e50a66 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:39.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8525" for this suite. • [SLOW TEST:16.097 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":533,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":21,"skipped":548,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:22.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:04:22.877: INFO: Creating deployment "webserver-deployment" Apr 29 22:04:22.881: INFO: Waiting for observed generation 1 Apr 29 22:04:24.889: INFO: Waiting for all required pods to come up Apr 29 22:04:24.893: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 29 22:04:38.904: INFO: Waiting for deployment "webserver-deployment" to complete Apr 29 22:04:38.909: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 29 22:04:38.916: INFO: Updating deployment webserver-deployment Apr 29 22:04:38.916: INFO: Waiting for observed generation 2 Apr 29 22:04:40.921: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 29 22:04:40.923: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 29 22:04:40.925: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 29 22:04:40.931: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 29 22:04:40.931: INFO: Waiting for the second rollout's 
replicaset to have .spec.replicas = 5 Apr 29 22:04:40.933: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 29 22:04:40.937: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 29 22:04:40.937: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 29 22:04:40.943: INFO: Updating deployment webserver-deployment Apr 29 22:04:40.943: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 29 22:04:40.947: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 29 22:04:40.949: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Apr 29 22:04:40.954: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6269 76817f58-eb86-404a-bc60-f61be87aa100 44138 3 2022-04-29 22:04:22 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-04-29 22:04:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-04-29 22:04:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00510d418 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-04-29 22:04:34 +0000 UTC,LastTransitionTime:2022-04-29 22:04:34 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2022-04-29 22:04:38 +0000 UTC,LastTransitionTime:2022-04-29 22:04:22 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 29 22:04:40.957: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-6269 893600c9-f2b9-434a-aef4-55c621e415b4 44141 3 2022-04-29 22:04:38 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 76817f58-eb86-404a-bc60-f61be87aa100 0xc00416ab57 0xc00416ab58}] [] [{kube-controller-manager Update apps/v1 2022-04-29 22:04:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"76817f58-eb86-404a-bc60-f61be87aa100\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00416abd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 29 22:04:40.957: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 29 22:04:40.957: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-6269 6e99b32c-749b-4668-9f0c-a3282b517135 44139 3 2022-04-29 22:04:22 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 76817f58-eb86-404a-bc60-f61be87aa100 0xc00416ac37 0xc00416ac38}] [] [{kube-controller-manager Update apps/v1 2022-04-29 22:04:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"76817f58-eb86-404a-bc60-f61be87aa100\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00416aca8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 29 22:04:40.962: INFO: Pod "webserver-deployment-795d758f88-9r5db" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-9r5db webserver-deployment-795d758f88- deployment-6269 203ce424-e8f9-41d7-a32b-d2540137d078 44107 0 2022-04-29 22:04:38 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 893600c9-f2b9-434a-aef4-55c621e415b4 0xc00510d7bf 0xc00510d7d0}] [] [{kube-controller-manager Update v1 2022-04-29 22:04:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"893600c9-f2b9-434a-aef4-55c621e415b4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-04-29 22:04:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bfj2z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bfj2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node
1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-04-29 22:04:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 29 22:04:40.962: INFO: Pod "webserver-deployment-795d758f88-fzk72" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-fzk72 webserver-deployment-795d758f88- deployment-6269 7fb5003f-6e89-4802-8862-93cc14d25bb6 44109 0 2022-04-29 22:04:38 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 893600c9-f2b9-434a-aef4-55c621e415b4 0xc00510d99f 0xc00510d9b0}] [] [{kube-controller-manager Update v1 2022-04-29 22:04:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"893600c9-f2b9-434a-aef4-55c621e415b4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8ldqk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8ldqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 29 22:04:40.963: INFO: Pod "webserver-deployment-795d758f88-h9mbc" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-h9mbc webserver-deployment-795d758f88- deployment-6269 a3cf3525-ce4b-4247-9c80-cb126bbd4ae9 44137 0 2022-04-29 22:04:38 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 893600c9-f2b9-434a-aef4-55c621e415b4 0xc00510db1f 0xc00510db30}] [] [{kube-controller-manager Update v1 2022-04-29 22:04:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"893600c9-f2b9-434a-aef4-55c621e415b4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-04-29 22:04:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xv7cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xv7cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2022-04-29 22:04:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 29 22:04:40.963: INFO: Pod "webserver-deployment-795d758f88-k9jct" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-k9jct webserver-deployment-795d758f88- deployment-6269 e634a35d-1c0d-49e1-a179-0dc0aa889a75 44087 0 2022-04-29 22:04:38 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 893600c9-f2b9-434a-aef4-55c621e415b4 0xc00510dcff 0xc00510dd10}] [] [{kube-controller-manager Update v1 2022-04-29 22:04:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"893600c9-f2b9-434a-aef4-55c621e415b4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-z9p4c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z9p4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 29 22:04:40.963: INFO: Pod "webserver-deployment-795d758f88-kx66h" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-kx66h webserver-deployment-795d758f88- deployment-6269 61293970-d26b-4542-9582-75c511bead32 44100 0 2022-04-29 22:04:38 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 893600c9-f2b9-434a-aef4-55c621e415b4 0xc00510de7f 0xc00510de90}] [] [{kube-controller-manager Update v1 2022-04-29 22:04:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"893600c9-f2b9-434a-aef4-55c621e415b4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-04-29 22:04:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ctnrb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ctnrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-04-29 22:04:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 29 22:04:40.964: INFO: Pod "webserver-deployment-847dcfb7fb-2g774" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2g774 webserver-deployment-847dcfb7fb- deployment-6269 563e0732-7fdd-415a-8c19-bc1908e1d87e 43991 0 2022-04-29 22:04:22 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.153" ], "mac": "26:13:b3:65:13:30", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.153" ], "mac": "26:13:b3:65:13:30", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 6e99b32c-749b-4668-9f0c-a3282b517135 0xc003bce05f 0xc003bce070}] [] [{kube-controller-manager Update v1 2022-04-29 22:04:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e99b32c-749b-4668-9f0c-a3282b517135\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-29 22:04:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-29 22:04:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.153\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-drxfr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-drxfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.153,StartTime:2022-04-29 22:04:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 22:04:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://1de282ed4b6db7865a03057ba0b8fd0bdcade886df8129f0ffe12d85e8ff7701,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.153,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 29 22:04:40.964: INFO: Pod "webserver-deployment-847dcfb7fb-7brtp" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-7brtp webserver-deployment-847dcfb7fb- deployment-6269 c925bdd5-758f-4a4b-a280-4c5924cb6b1d 43868 0 2022-04-29 22:04:22 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.195" ], "mac": "da:7d:47:93:7a:1c", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.195" ], "mac": "da:7d:47:93:7a:1c", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 6e99b32c-749b-4668-9f0c-a3282b517135 0xc003bce25f 0xc003bce270}] [] [{kube-controller-manager Update v1 2022-04-29 22:04:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e99b32c-749b-4668-9f0c-a3282b517135\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-29 22:04:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-29 22:04:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.195\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gj4b8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gj4b8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tolerati
on{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.195,StartTime:2022-04-29 22:04:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 22:04:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://e43880450d41de8325fc65880c6d2f28b14270b059c664fb0b7ba1e9cf4ce6a3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.195,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 29 22:04:40.964: INFO: Pod "webserver-deployment-847dcfb7fb-h6gzq" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-h6gzq webserver-deployment-847dcfb7fb- deployment-6269 cff4e355-e857-4b7c-a404-dfe3771b5a76 43877 0 2022-04-29 22:04:22 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.197" ], "mac": "06:6f:5f:2f:95:1a", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.197" ], "mac": "06:6f:5f:2f:95:1a", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 6e99b32c-749b-4668-9f0c-a3282b517135 0xc003bce45f 0xc003bce470}] [] [{kube-controller-manager Update v1 2022-04-29 22:04:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e99b32c-749b-4668-9f0c-a3282b517135\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-29 22:04:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-29 22:04:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.197\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8847d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8847d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volu
meDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.197,StartTime:2022-04-29 22:04:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 22:04:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://5d9630d5ad717154c517ab447a7315dc0ccf488668b706c156b58d7378a7a046,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.197,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 29 22:04:40.965: INFO: Pod "webserver-deployment-847dcfb7fb-lxcf6" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-lxcf6 webserver-deployment-847dcfb7fb- deployment-6269 2bdfc5cd-d61c-4d52-8313-e185e290e639 44009 0 2022-04-29 22:04:22 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.154" ], "mac": "e2:a4:2b:b6:6b:65", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.154" ], "mac": "e2:a4:2b:b6:6b:65", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 
6e99b32c-749b-4668-9f0c-a3282b517135 0xc003bce65f 0xc003bce670}] [] [{kube-controller-manager Update v1 2022-04-29 22:04:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e99b32c-749b-4668-9f0c-a3282b517135\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-29 22:04:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-29 22:04:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.154\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jm4wl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jm4wl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpti
ons:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.154,StartTime:2022-04-29 22:04:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 22:04:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://209260a5b64d8c29b0f1b5582654d1def6928aa4d1e3f72c06b158300b59be57,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.154,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 29 22:04:40.965: INFO: Pod "webserver-deployment-847dcfb7fb-m5tcs" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-m5tcs webserver-deployment-847dcfb7fb- deployment-6269 a10c7cc5-1ab4-4760-837b-0fe5111c4844 43871 0 2022-04-29 22:04:22 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.194" ], "mac": "82:2d:de:3c:66:3b", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.194" ], 
"mac": "82:2d:de:3c:66:3b", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 6e99b32c-749b-4668-9f0c-a3282b517135 0xc003bce85f 0xc003bce870}] [] [{kube-controller-manager Update v1 2022-04-29 22:04:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e99b32c-749b-4668-9f0c-a3282b517135\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-29 22:04:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-29 22:04:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.194\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kpm65,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kpm65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.194,StartTime:2022-04-29 22:04:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 22:04:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://b44ff979d1623f1e39e1ca900ca056de62995e93ef87c37e76b4251e0bfc8073,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.194,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 29 22:04:40.965: INFO: Pod "webserver-deployment-847dcfb7fb-pv6gk" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-pv6gk webserver-deployment-847dcfb7fb- deployment-6269 4c545695-b733-4cde-9b92-afe5dfd25485 43978 0 2022-04-29 22:04:22 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.150" ], "mac": "0e:83:eb:64:c6:7a", "default": 
true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.150" ], "mac": "0e:83:eb:64:c6:7a", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 6e99b32c-749b-4668-9f0c-a3282b517135 0xc003bcea5f 0xc003bcea70}] [] [{kube-controller-manager Update v1 2022-04-29 22:04:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e99b32c-749b-4668-9f0c-a3282b517135\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-29 22:04:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-29 22:04:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.150\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lfz7s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lfz7s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Ter
minationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.150,StartTime:2022-04-29 22:04:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 22:04:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://0d976d62703ea698cf1cfbb842740d264881456cbbd83d78a46f72b25f6d706e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.150,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 29 22:04:40.966: INFO: Pod "webserver-deployment-847dcfb7fb-rd8f2" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-rd8f2 webserver-deployment-847dcfb7fb- deployment-6269 cc9ff58f-a9c9-404e-8182-05326ec78836 43961 0 2022-04-29 22:04:22 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] 
map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.151" ], "mac": "2a:d6:12:a3:d5:f8", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.151" ], "mac": "2a:d6:12:a3:d5:f8", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 6e99b32c-749b-4668-9f0c-a3282b517135 0xc003bcec5f 0xc003bcec70}] [] [{kube-controller-manager Update v1 2022-04-29 22:04:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e99b32c-749b-4668-9f0c-a3282b517135\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-29 22:04:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-29 22:04:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.151\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sbrtw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sbrtw,ReadOnly:tru
e,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.151,StartTime:2022-04-29 22:04:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 22:04:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://4281edc43ee968687f5c79532516546e7d4382feba02e8acdc1245660fdcc2cb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.151,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 29 22:04:40.966: INFO: Pod "webserver-deployment-847dcfb7fb-wgcff" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-wgcff 
webserver-deployment-847dcfb7fb- deployment-6269 d7a60fa9-67ba-4a0c-aca1-da0c671f9736 44144 0 2022-04-29 22:04:40 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 6e99b32c-749b-4668-9f0c-a3282b517135 0xc003bcee5f 0xc003bcee70}] [] [{kube-controller-manager Update v1 2022-04-29 22:04:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e99b32c-749b-4668-9f0c-a3282b517135\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vhshj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vhshj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAs
NonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 29 22:04:40.966: INFO: Pod "webserver-deployment-847dcfb7fb-xwjbm" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-xwjbm webserver-deployment-847dcfb7fb- deployment-6269 480b668c-949f-463a-9187-c4ad15a21450 43874 0 2022-04-29 22:04:22 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.196" ], "mac": "62:50:41:0e:1f:56", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.196" ], "mac": "62:50:41:0e:1f:56", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 6e99b32c-749b-4668-9f0c-a3282b517135 0xc003bcef9f 0xc003bcefb0}] [] [{kube-controller-manager Update v1 2022-04-29 22:04:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e99b32c-749b-4668-9f0c-a3282b517135\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-29 22:04:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-29 22:04:28 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.196\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hg62l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hg62l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.196,StartTime:2022-04-29 22:04:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 22:04:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://79c2b3ce7f790f68b18b7fb920c018b3ae51a52da114acd9ca7a15337d0dd0f3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.196,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:40.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6269" for this suite. 
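The proportional-scaling assertions above reduce to inspecting how the Deployment's desired replicas are split across its old and new ReplicaSets mid-rollout. A minimal client-go sketch of that inspection, assuming the kubeconfig path from this run; the namespace and deployment name are reused from the log purely as placeholders:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	dep, err := client.AppsV1().Deployments("deployment-6269").Get(ctx, "webserver-deployment", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// ReplicaSets owned by the Deployment share its label selector.
	sel := metav1.FormatLabelSelector(dep.Spec.Selector)
	rsList, err := client.AppsV1().ReplicaSets("deployment-6269").List(ctx, metav1.ListOptions{LabelSelector: sel})
	if err != nil {
		panic(err)
	}
	for _, rs := range rsList.Items {
		// With proportional scaling, desired replicas are distributed across
		// old and new ReplicaSets roughly in proportion to their current sizes.
		fmt.Printf("%s: desired=%d ready=%d\n", rs.Name, *rs.Spec.Replicas, rs.Status.ReadyReplicas)
	}
}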
• [SLOW TEST:18.123 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":22,"skipped":548,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:21.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 29 22:04:21.775: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:45.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3120" for this suite. 
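The submit-and-remove flow above is watch-driven: the test opens a watch scoped to the pod's name first, then creates and deletes the pod and confirms both events arrive. A minimal client-go sketch of the same pattern; the namespace, pod name, and zero grace period below are illustrative, not the test's exact values:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	ns, name := "default", "watch-demo" // placeholders

	// Open the watch before creating the pod so no event is missed.
	w, err := client.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{Containers: []corev1.Container{
			{Name: "c", Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1"},
		}},
	}
	if _, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	grace := int64(0)
	_ = client.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{GracePeriodSeconds: &grace})

	// Both the creation and the deletion show up on the same watch channel.
	for ev := range w.ResultChan() {
		fmt.Println("observed:", ev.Type)
		if ev.Type == watch.Deleted {
			return
		}
	}
}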
• [SLOW TEST:23.448 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":275,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:22.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 29 22:04:22.853: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 29 22:04:24.862: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 22:04:26.866: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 22:04:28.867: 
INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 22:04:30.865: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 22:04:32.866: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 22:04:34.866: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866662, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 29 22:04:37.876: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:04:37.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:45.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-5361" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:23.610 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":35,"skipped":510,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:41.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:04:41.035: INFO: The status of Pod busybox-readonly-fs7e7840ee-37a2-4074-a3f7-0fbba59a8903 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:04:43.039: INFO: The status of Pod busybox-readonly-fs7e7840ee-37a2-4074-a3f7-0fbba59a8903 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:04:45.039: INFO: The status of Pod busybox-readonly-fs7e7840ee-37a2-4074-a3f7-0fbba59a8903 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:04:47.040: INFO: The status of Pod busybox-readonly-fs7e7840ee-37a2-4074-a3f7-0fbba59a8903 is Pending, waiting for it to be Running (with Ready = true) Apr 29 
22:04:49.039: INFO: The status of Pod busybox-readonly-fs7e7840ee-37a2-4074-a3f7-0fbba59a8903 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:04:51.037: INFO: The status of Pod busybox-readonly-fs7e7840ee-37a2-4074-a3f7-0fbba59a8903 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:51.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4368" for this suite. • [SLOW TEST:10.048 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a read only busybox container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":560,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:38.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4934.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4934.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 29 22:04:52.306: INFO: DNS probes using dns-4934/dns-test-0810fd5a-4626-4e9b-9c40-2997e00438c4 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:52.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4934" for this suite. • [SLOW TEST:14.084 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":13,"skipped":155,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:51.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 29 22:04:51.088: INFO: Waiting up to 5m0s for pod "pod-eda1990b-61ad-487f-a6bd-852e74899962" in namespace "emptydir-5676" to be "Succeeded or Failed" Apr 29 22:04:51.090: INFO: Pod "pod-eda1990b-61ad-487f-a6bd-852e74899962": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11339ms Apr 29 22:04:53.093: INFO: Pod "pod-eda1990b-61ad-487f-a6bd-852e74899962": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005088017s Apr 29 22:04:55.096: INFO: Pod "pod-eda1990b-61ad-487f-a6bd-852e74899962": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008033183s STEP: Saw pod success Apr 29 22:04:55.096: INFO: Pod "pod-eda1990b-61ad-487f-a6bd-852e74899962" satisfied condition "Succeeded or Failed" Apr 29 22:04:55.098: INFO: Trying to get logs from node node2 pod pod-eda1990b-61ad-487f-a6bd-852e74899962 container test-container: STEP: delete the pod Apr 29 22:04:55.110: INFO: Waiting for pod pod-eda1990b-61ad-487f-a6bd-852e74899962 to disappear Apr 29 22:04:55.113: INFO: Pod pod-eda1990b-61ad-487f-a6bd-852e74899962 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:55.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5676" for this suite. 
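The (root,0644,tmpfs) case above boils down to a pod with a memory-backed emptyDir volume whose container writes a file with the expected mode and exits. A sketch of that pod shape, with placeholder names and a busybox command standing in for the test's mounttest image:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" makes the emptyDir a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The e2e framework then waits for phase Succeeded and inspects the logs,
	// as the "Succeeded or Failed" polling above shows.
}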
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":561,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:52.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 29 22:04:58.407: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:58.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7805" for this suite. • [SLOW TEST:6.080 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":168,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:55.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 29 
22:04:55.180: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d58f02a9-d90b-4c55-bda7-296977af9c54" in namespace "downward-api-2668" to be "Succeeded or Failed" Apr 29 22:04:55.183: INFO: Pod "downwardapi-volume-d58f02a9-d90b-4c55-bda7-296977af9c54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.504495ms Apr 29 22:04:57.186: INFO: Pod "downwardapi-volume-d58f02a9-d90b-4c55-bda7-296977af9c54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006284026s Apr 29 22:04:59.190: INFO: Pod "downwardapi-volume-d58f02a9-d90b-4c55-bda7-296977af9c54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009887052s STEP: Saw pod success Apr 29 22:04:59.190: INFO: Pod "downwardapi-volume-d58f02a9-d90b-4c55-bda7-296977af9c54" satisfied condition "Succeeded or Failed" Apr 29 22:04:59.193: INFO: Trying to get logs from node node2 pod downwardapi-volume-d58f02a9-d90b-4c55-bda7-296977af9c54 container client-container: STEP: delete the pod Apr 29 22:04:59.205: INFO: Waiting for pod downwardapi-volume-d58f02a9-d90b-4c55-bda7-296977af9c54 to disappear Apr 29 22:04:59.207: INFO: Pod downwardapi-volume-d58f02a9-d90b-4c55-bda7-296977af9c54 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:04:59.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2668" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":566,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:39.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-2220 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 29 22:04:39.486: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 29 22:04:39.517: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:04:41.521: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:04:43.520: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:04:45.520: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:04:47.520: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:04:49.520: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:04:51.521: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:04:53.522: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:04:55.520: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:04:57.521: INFO: 
The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:04:59.522: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:05:01.519: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 29 22:05:01.524: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 29 22:05:05.545: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Apr 29 22:05:05.545: INFO: Breadth first check of 10.244.3.201 on host 10.10.190.207... Apr 29 22:05:05.547: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.172:9080/dial?request=hostname&protocol=http&host=10.244.3.201&port=8080&tries=1'] Namespace:pod-network-test-2220 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 22:05:05.547: INFO: >>> kubeConfig: /root/.kube/config Apr 29 22:05:05.633: INFO: Waiting for responses: map[] Apr 29 22:05:05.633: INFO: reached 10.244.3.201 after 0/1 tries Apr 29 22:05:05.633: INFO: Breadth first check of 10.244.4.163 on host 10.10.190.208... Apr 29 22:05:05.635: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.172:9080/dial?request=hostname&protocol=http&host=10.244.4.163&port=8080&tries=1'] Namespace:pod-network-test-2220 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 22:05:05.635: INFO: >>> kubeConfig: /root/.kube/config Apr 29 22:05:05.727: INFO: Waiting for responses: map[] Apr 29 22:05:05.727: INFO: reached 10.244.4.163 after 0/1 tries Apr 29 22:05:05.727: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:05.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2220" for this suite. 
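The breadth-first checks above ask agnhost's /dial endpoint on the test-container pod to reach each netserver and report the hostnames it saw. A sketch of that probe as a plain HTTP client; the pod IPs are placeholders, the test actually runs this via curl inside the pod, and the `responses` JSON field name is assumed from the agnhost convention rather than confirmed by this log:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	proxyIP, targetIP := "10.244.4.172", "10.244.3.201" // placeholder pod IPs
	url := fmt.Sprintf(
		"http://%s:9080/dial?request=hostname&protocol=http&host=%s&port=8080&tries=1",
		proxyIP, targetIP)

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The dial endpoint answers with one entry per try,
	// e.g. {"responses":["netserver-0"]} (field name assumed).
	var out struct {
		Responses []string `json:"responses"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println("hostnames reached:", out.Responses)
}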
• [SLOW TEST:26.269 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":557,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:03:36.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-e08c3d27-8dec-4c48-8c7f-c59f32499fc6 STEP: Creating the pod Apr 29 22:03:36.278: INFO: The status of Pod pod-configmaps-6e44b6a5-f7e3-442e-8475-b35da3983dfa is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:03:38.282: INFO: The status of Pod pod-configmaps-6e44b6a5-f7e3-442e-8475-b35da3983dfa is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:03:40.284: INFO: The status of Pod pod-configmaps-6e44b6a5-f7e3-442e-8475-b35da3983dfa is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:03:42.282: INFO: The status of Pod pod-configmaps-6e44b6a5-f7e3-442e-8475-b35da3983dfa is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:03:44.284: INFO: The status of Pod pod-configmaps-6e44b6a5-f7e3-442e-8475-b35da3983dfa is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:03:46.283: INFO: The status of Pod pod-configmaps-6e44b6a5-f7e3-442e-8475-b35da3983dfa is Running (Ready = true) STEP: Updating configmap configmap-test-upd-e08c3d27-8dec-4c48-8c7f-c59f32499fc6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:07.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2998" for this suite. 
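The update half of the ConfigMap test above is a straight read-modify-write; the long tail (the run took ~91 seconds) is waiting for kubelet to re-project the mounted volume. A sketch of the update call, with placeholder namespace and names:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	cm, err := client.CoreV1().ConfigMaps("default").Get(ctx, "configmap-test-upd", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2" // mutate in place, then write back
	if _, err := client.CoreV1().ConfigMaps("default").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	// Pods consuming the ConfigMap as a volume see the new value only after
	// kubelet's next sync of the projected contents; env-var consumers never
	// see updates at all.
}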
• [SLOW TEST:90.954 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":191,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:59.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:04:59.252: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:07.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7454" for this suite. • [SLOW TEST:8.140 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":26,"skipped":576,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:05.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Apr 29 22:05:06.050: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 29 22:05:06.067: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 29 22:05:08.076: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866706, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866706, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866706, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866706, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 29 22:05:11.088: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:11.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-22" for this suite. STEP: Destroying namespace "webhook-22-markers" for this suite. 
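Listing and collection-deleting mutating webhooks, as exercised above, both key off a label selector on the webhook configurations (the e2e test labels everything it creates). A sketch using admissionregistration/v1, with a placeholder label:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	// Existence selector: matches any configuration carrying this label key.
	sel := metav1.ListOptions{LabelSelector: "e2e-list-test-uuid"}

	hooks := client.AdmissionregistrationV1().MutatingWebhookConfigurations()
	list, err := hooks.List(ctx, sel)
	if err != nil {
		panic(err)
	}
	fmt.Println("mutating webhook configurations found:", len(list.Items))

	// DeleteCollection removes every configuration matching the selector in
	// one call; afterwards, newly created ConfigMaps are no longer mutated,
	// which is exactly what the test verifies above.
	if err := hooks.DeleteCollection(ctx, metav1.DeleteOptions{}, sel); err != nil {
		panic(err)
	}
}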
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.424 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":36,"skipped":585,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:07.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-2b5ebb56-9033-4ff2-8303-49a58fb2295b STEP: Creating a pod to test consume configMaps Apr 29 22:05:07.430: INFO: Waiting up to 5m0s for pod "pod-configmaps-a425eb72-ab2d-45a9-9fb8-4f49ce1f3f8a" in namespace "configmap-7564" to be "Succeeded or Failed" Apr 29 22:05:07.433: INFO: Pod "pod-configmaps-a425eb72-ab2d-45a9-9fb8-4f49ce1f3f8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.390332ms Apr 29 22:05:09.437: INFO: Pod "pod-configmaps-a425eb72-ab2d-45a9-9fb8-4f49ce1f3f8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006026358s Apr 29 22:05:11.440: INFO: Pod "pod-configmaps-a425eb72-ab2d-45a9-9fb8-4f49ce1f3f8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009816848s STEP: Saw pod success Apr 29 22:05:11.440: INFO: Pod "pod-configmaps-a425eb72-ab2d-45a9-9fb8-4f49ce1f3f8a" satisfied condition "Succeeded or Failed" Apr 29 22:05:11.442: INFO: Trying to get logs from node node2 pod pod-configmaps-a425eb72-ab2d-45a9-9fb8-4f49ce1f3f8a container agnhost-container: STEP: delete the pod Apr 29 22:05:11.455: INFO: Waiting for pod pod-configmaps-a425eb72-ab2d-45a9-9fb8-4f49ce1f3f8a to disappear Apr 29 22:05:11.457: INFO: Pod pod-configmaps-a425eb72-ab2d-45a9-9fb8-4f49ce1f3f8a no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:11.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7564" for this suite. 
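The consumable-from-volume case above creates a ConfigMap, mounts it into a pod as files (one file per key), and checks that the container reads the expected content. A sketch of those two objects, with placeholder names and a busybox `cat` standing in for the test's agnhost container:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "cm-demo"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := client.CoreV1().ConfigMaps("default").Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "cm-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cm-vol",
				VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "cm-demo"},
				}},
			}},
			Containers: []corev1.Container{{
				Name:         "agnhost-container",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/cm/data-1"}, // each key becomes a file
				VolumeMounts: []corev1.VolumeMount{{Name: "cm-vol", MountPath: "/etc/cm"}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}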
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":588,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:11.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token Apr 29 22:05:12.038: INFO: created pod pod-service-account-defaultsa Apr 29 22:05:12.038: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 29 22:05:12.047: INFO: created pod pod-service-account-mountsa Apr 29 22:05:12.047: INFO: pod pod-service-account-mountsa service account token volume mount: true Apr 29 22:05:12.055: INFO: created pod pod-service-account-nomountsa Apr 29 22:05:12.055: INFO: pod pod-service-account-nomountsa service account token volume mount: false Apr 29 22:05:12.065: INFO: created pod pod-service-account-defaultsa-mountspec Apr 29 22:05:12.065: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 29 22:05:12.074: INFO: created pod pod-service-account-mountsa-mountspec Apr 29 22:05:12.074: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 29 22:05:12.083: INFO: created pod pod-service-account-nomountsa-mountspec Apr 29 22:05:12.083: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 29 22:05:12.093: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 29 22:05:12.093: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 29 22:05:12.102: INFO: created pod pod-service-account-mountsa-nomountspec Apr 29 22:05:12.102: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 29 22:05:12.111: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 29 22:05:12.111: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:12.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8502" for this suite. 
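The opt-out matrix above crosses ServiceAccount-level and pod-level automountServiceAccountToken settings, which is why nine pods are created; when the pod spec sets the field, it overrides the ServiceAccount's default. A sketch of the two levels, with placeholder names:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	f := false

	// ServiceAccount-level default: pods using "nomount-sa" get no token
	// volume unless their own spec says otherwise.
	sa := &corev1.ServiceAccount{
		ObjectMeta:                   metav1.ObjectMeta{Name: "nomount-sa"},
		AutomountServiceAccountToken: &f,
	}
	if _, err := client.CoreV1().ServiceAccounts("default").Create(ctx, sa, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Pod-level override: this pod opts out regardless of its ServiceAccount,
	// matching the "-nomountspec" pods in the log above.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "nomount-pod"},
		Spec: corev1.PodSpec{
			ServiceAccountName:           "default",
			AutomountServiceAccountToken: &f,
			Containers: []corev1.Container{{
				Name: "c", Image: "busybox", Command: []string{"sleep", "3600"},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}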
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":28,"skipped":603,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:12.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:12.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5664" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":29,"skipped":623,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:45.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:04:46.026: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 29 22:04:51.029: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 29 22:04:53.035: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 29 22:04:55.039: INFO: Creating deployment "test-rollover-deployment" Apr 29 22:04:55.046: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 29 22:04:57.054: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 29 22:04:57.060: INFO: Ensure that both replica sets have 1 created replica Apr 29 22:04:57.065: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 29 22:04:57.074: INFO: Updating deployment test-rollover-deployment Apr 29 22:04:57.074: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 29 22:04:59.082: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 
Apr 29 22:04:59.087: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 29 22:04:59.091: INFO: all replica sets need to contain the pod-template-hash label Apr 29 22:04:59.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866695, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866695, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866697, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866695, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 22:05:01.097: INFO: all replica sets need to contain the pod-template-hash label Apr 29 22:05:01.098: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866695, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866695, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866697, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866695, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 22:05:03.097: INFO: all replica sets need to contain the pod-template-hash label Apr 29 22:05:03.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866695, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866695, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866702, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866695, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 22:05:05.099: INFO: all replica sets need to contain the pod-template-hash label Apr 29 22:05:05.099: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866695, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866695, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866702, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866695, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 22:05:07.099: INFO: all replica sets need to contain the pod-template-hash label Apr 29 22:05:07.099: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866695, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866695, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866702, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866695, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 22:05:09.098: INFO: all replica sets need to contain the pod-template-hash label Apr 29 22:05:09.098: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866695, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866695, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866702, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866695, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 22:05:11.097: INFO: all replica sets need to contain the pod-template-hash label Apr 29 22:05:11.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866695, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866695, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866702, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866695, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 22:05:13.098: INFO: Apr 29 22:05:13.098: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Apr 29 22:05:13.106: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-4752 d8081a47-4682-41b8-8fd8-cde06aff97fb 45106 2 2022-04-29 22:04:55 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-04-29 22:04:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-04-29 22:05:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0003acf18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-04-29 22:04:55 +0000 UTC,LastTransitionTime:2022-04-29 22:04:55 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2022-04-29 22:05:12 +0000 UTC,LastTransitionTime:2022-04-29 22:04:55 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 29 22:05:13.110: INFO: New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-4752 c007f270-a862-4040-8548-c041d4e9ccd5 45097 2 2022-04-29 22:04:57 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment d8081a47-4682-41b8-8fd8-cde06aff97fb 0xc000ad69f0 0xc000ad69f1}] [] [{kube-controller-manager Update apps/v1 2022-04-29 22:05:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d8081a47-4682-41b8-8fd8-cde06aff97fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000ad6ad8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] 
map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 29 22:05:13.110: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 29 22:05:13.110: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4752 6616c65f-3176-4f09-a204-ad07e3207bfb 45105 2 2022-04-29 22:04:46 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment d8081a47-4682-41b8-8fd8-cde06aff97fb 0xc000ad66a7 0xc000ad66a8}] [] [{e2e.test Update apps/v1 2022-04-29 22:04:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-04-29 22:05:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d8081a47-4682-41b8-8fd8-cde06aff97fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000ad6778 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 29 22:05:13.110: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-4752 0d3eac0a-1a94-42e7-a9f9-c2b044e3ef83 44626 2 2022-04-29 22:04:55 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment d8081a47-4682-41b8-8fd8-cde06aff97fb 0xc000ad6857 0xc000ad6858}] [] [{kube-controller-manager Update apps/v1 2022-04-29 22:04:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d8081a47-4682-41b8-8fd8-cde06aff97fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000ad6938 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 29 22:05:13.113: INFO: Pod "test-rollover-deployment-98c5f4599-l74hd" is available: &Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-l74hd test-rollover-deployment-98c5f4599- deployment-4752 cc31b098-a42e-4cbc-81bf-123c0b350e58 44755 0 2022-04-29 22:04:57 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.169" ], "mac": "52:ca:51:b7:bc:fd", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.169" ], "mac": "52:ca:51:b7:bc:fd", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 c007f270-a862-4040-8548-c041d4e9ccd5 0xc004e5968f 0xc004e596a0}] [] [{kube-controller-manager Update v1 2022-04-29 22:04:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c007f270-a862-4040-8548-c041d4e9ccd5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-29 22:04:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-29 22:05:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.169\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9lt4n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9lt4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vo
lumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:05:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:05:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:04:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.169,StartTime:2022-04-29 22:04:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 22:05:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://c71a3e6784673fe9a1a91b877ffe5cd42bb9e18c76d3ad0c4b1f96ef52e4eca8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.169,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:13.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4752" for this suite. 
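------------------------------
The rollover above is driven entirely by the strategy fields visible in the dumped spec: maxUnavailable:0 with maxSurge:1 keeps the old pod until its replacement is Ready, and minReadySeconds:10 delays scale-down of the old ReplicaSets until the new pod has been Ready for ten seconds, which is why the status polling loops from 22:04:59 to 22:05:13. A minimal sketch of the same shape (names are illustrative; both images appear elsewhere in this run):

  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: rollover-demo
  spec:
    replicas: 1
    minReadySeconds: 10            # new pod must stay Ready this long before old RSes scale down
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 0          # never drop below the desired replica count
        maxSurge: 1                # allow one extra pod during the rollover
    selector:
      matchLabels:
        app: rollover-demo
    template:
      metadata:
        labels:
          app: rollover-demo
      spec:
        containers:
        - name: web
          image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
  EOF
  # Changing the pod template creates a revision-2 ReplicaSet and rolls pods over to it:
  kubectl set image deployment/rollover-demo web=k8s.gcr.io/e2e-test-images/agnhost:2.32
------------------------------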
• [SLOW TEST:27.128 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":36,"skipped":512,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:58.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 29 22:04:59.175: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 29 22:05:01.183: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866699, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866699, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866699, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866699, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 22:05:03.186: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866699, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866699, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866699, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866699, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 29 22:05:06.196: INFO: Waiting for amount of service:e2e-test-webhook 
endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:05:06.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9304-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:14.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1147" for this suite. STEP: Destroying namespace "webhook-1147-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.829 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":15,"skipped":179,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:07.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:05:07.261: INFO: Creating ReplicaSet my-hostname-basic-f15f1744-f59a-425a-a64c-fb0454ebf72e Apr 29 22:05:07.268: INFO: Pod name my-hostname-basic-f15f1744-f59a-425a-a64c-fb0454ebf72e: Found 0 pods out of 1 Apr 29 22:05:12.272: INFO: Pod name my-hostname-basic-f15f1744-f59a-425a-a64c-fb0454ebf72e: Found 1 pods out of 1 Apr 29 22:05:12.272: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-f15f1744-f59a-425a-a64c-fb0454ebf72e" is running Apr 29 22:05:12.274: INFO: Pod "my-hostname-basic-f15f1744-f59a-425a-a64c-fb0454ebf72e-zt4ww" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-29 22:05:07 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-29 22:05:10 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-29 22:05:10 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-29 22:05:07 +0000 UTC Reason: Message:}]) Apr 29 22:05:12.275: INFO: Trying to dial the pod Apr 29 22:05:17.286: INFO: Controller my-hostname-basic-f15f1744-f59a-425a-a64c-fb0454ebf72e: Got 
expected result from replica 1 [my-hostname-basic-f15f1744-f59a-425a-a64c-fb0454ebf72e-zt4ww]: "my-hostname-basic-f15f1744-f59a-425a-a64c-fb0454ebf72e-zt4ww", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:17.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9608" for this suite. • [SLOW TEST:10.054 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":12,"skipped":214,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:21.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:21.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3501" for this suite. 
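------------------------------
The probe test above holds its pod for a full minute (note the 60-second SLOW TEST accounting that follows) to show that a failing readiness probe keeps a pod out of Ready without ever restarting it; restarts are the business of liveness probes only. A sketch, assuming a stock busybox image (the pod name is illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: never-ready
  spec:
    containers:
    - name: main
      image: busybox
      command: ["sleep", "3600"]
      readinessProbe:
        exec:
          command: ["/bin/false"]   # always fails, so the pod never becomes Ready
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF
  # Expect READY 0/1 and RESTARTS 0 for as long as the pod runs:
  kubectl get pod never-ready
------------------------------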
• [SLOW TEST:60.041 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":122,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:13.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Apr 29 22:05:13.176: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:26.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9511" for this suite. 
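------------------------------
The init-container test above relies on the ordering guarantee that initContainers run one at a time, each to successful completion, before any regular container starts; with restartPolicy Never a failed init container fails the whole pod instead of retrying. A minimal sketch (names illustrative, busybox assumed available):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-demo
  spec:
    restartPolicy: Never
    initContainers:                 # run sequentially, in order, before 'main' starts
    - name: init-1
      image: busybox
      command: ["sh", "-c", "echo init-1 done"]
    - name: init-2
      image: busybox
      command: ["sh", "-c", "echo init-2 done"]
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "echo app ran"]
  EOF
------------------------------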
• [SLOW TEST:13.114 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":37,"skipped":529,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:26.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:26.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5418" for this suite. 
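------------------------------
The Kubelet test above makes one narrow point: a pod whose only container exits non-zero on every start can still be deleted cleanly, because deletion does not depend on container health. A sketch (pod name illustrative, busybox assumed):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: always-fails
  spec:
    containers:
    - name: main
      image: busybox
      command: ["/bin/false"]       # exits 1 every time; the pod crash-loops
  EOF
  # Deletion succeeds regardless of the crash loop:
  kubectl delete pod always-fails
------------------------------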
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":543,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:14.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:05:14.325: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 29 22:05:22.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9804 --namespace=crd-publish-openapi-9804 create -f -' Apr 29 22:05:23.469: INFO: stderr: "" Apr 29 22:05:23.470: INFO: stdout: "e2e-test-crd-publish-openapi-6189-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 29 22:05:23.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9804 --namespace=crd-publish-openapi-9804 delete e2e-test-crd-publish-openapi-6189-crds test-cr' Apr 29 22:05:23.640: INFO: stderr: "" Apr 29 22:05:23.640: INFO: stdout: "e2e-test-crd-publish-openapi-6189-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Apr 29 22:05:23.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9804 --namespace=crd-publish-openapi-9804 apply -f -' Apr 29 22:05:23.916: INFO: stderr: "" Apr 29 22:05:23.916: INFO: stdout: "e2e-test-crd-publish-openapi-6189-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 29 22:05:23.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9804 --namespace=crd-publish-openapi-9804 delete e2e-test-crd-publish-openapi-6189-crds test-cr' Apr 29 22:05:24.090: INFO: stderr: "" Apr 29 22:05:24.090: INFO: stdout: "e2e-test-crd-publish-openapi-6189-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 29 22:05:24.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9804 explain e2e-test-crd-publish-openapi-6189-crds' Apr 29 22:05:24.417: INFO: stderr: "" Apr 29 22:05:24.417: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6189-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. 
Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:28.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9804" for this suite. • [SLOW TEST:13.777 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":16,"skipped":192,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:28.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating cluster-info Apr 29 22:05:28.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4932 cluster-info' Apr 29 22:05:28.312: INFO: stderr: "" Apr 29 22:05:28.312: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.10.190.202:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:28.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4932" for this suite. 
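------------------------------
The CRD exercised above marks a nested field with x-kubernetes-preserve-unknown-fields, which is why the apiserver accepted a test-cr carrying arbitrary properties and why kubectl explain can still describe the published schema. A hedged sketch of the relevant stanza (group, kind, and plural are illustrative stand-ins for the generated e2e names):

  kubectl apply -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: waldos.example.com
  spec:
    group: example.com
    scope: Namespaced
    names:
      plural: waldos
      singular: waldo
      kind: Waldo
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              x-kubernetes-preserve-unknown-fields: true   # unknown fields under spec survive pruning
  EOF
------------------------------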
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":17,"skipped":217,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:28.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:149 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Apr 29 22:05:28.378: INFO: starting watch STEP: patching STEP: updating Apr 29 22:05:28.386: INFO: waiting for watch events with expected annotations Apr 29 22:05:28.386: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:28.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-5603" for this suite. • ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:26.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-26ad1d11-afe3-47bb-915a-d5c3489ff293 STEP: Creating a pod to test consume configMaps Apr 29 22:05:26.397: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0bc07981-9cc9-4a5d-ab3d-109622972be1" in namespace "projected-354" to be "Succeeded or Failed" Apr 29 22:05:26.399: INFO: Pod "pod-projected-configmaps-0bc07981-9cc9-4a5d-ab3d-109622972be1": Phase="Pending", Reason="", readiness=false. Elapsed: 1.992703ms Apr 29 22:05:28.402: INFO: Pod "pod-projected-configmaps-0bc07981-9cc9-4a5d-ab3d-109622972be1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004891247s Apr 29 22:05:30.409: INFO: Pod "pod-projected-configmaps-0bc07981-9cc9-4a5d-ab3d-109622972be1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011243338s Apr 29 22:05:32.412: INFO: Pod "pod-projected-configmaps-0bc07981-9cc9-4a5d-ab3d-109622972be1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014888813s Apr 29 22:05:34.418: INFO: Pod "pod-projected-configmaps-0bc07981-9cc9-4a5d-ab3d-109622972be1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.021135057s STEP: Saw pod success Apr 29 22:05:34.418: INFO: Pod "pod-projected-configmaps-0bc07981-9cc9-4a5d-ab3d-109622972be1" satisfied condition "Succeeded or Failed" Apr 29 22:05:34.421: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-0bc07981-9cc9-4a5d-ab3d-109622972be1 container agnhost-container: STEP: delete the pod Apr 29 22:05:34.435: INFO: Waiting for pod pod-projected-configmaps-0bc07981-9cc9-4a5d-ab3d-109622972be1 to disappear Apr 29 22:05:34.438: INFO: Pod pod-projected-configmaps-0bc07981-9cc9-4a5d-ab3d-109622972be1 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:34.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-354" for this suite. • [SLOW TEST:8.082 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":553,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:17.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 29 22:05:17.671: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 29 22:05:19.678: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866717, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866717, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866717, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866717, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 29 22:05:22.688: INFO: 
Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:34.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9465" for this suite. STEP: Destroying namespace "webhook-9465-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.506 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":13,"skipped":217,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:34.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics Apr 29 22:05:35.537: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) Apr 29 22:05:35.611: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For 
garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:35.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4812" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":40,"skipped":562,"failed":0} SSSSSS ------------------------------ {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":18,"skipped":226,"failed":0} [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:28.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Apr 29 22:05:38.955: INFO: Successfully updated pod "adopt-release-jmqvb" STEP: Checking that the Job readopts the Pod Apr 29 22:05:38.955: INFO: Waiting up to 15m0s for pod "adopt-release-jmqvb" in namespace "job-5844" to be "adopted" Apr 29 22:05:38.959: INFO: Pod "adopt-release-jmqvb": Phase="Running", Reason="", readiness=true. Elapsed: 4.261605ms Apr 29 22:05:40.962: INFO: Pod "adopt-release-jmqvb": Phase="Running", Reason="", readiness=true. Elapsed: 2.007142715s Apr 29 22:05:40.962: INFO: Pod "adopt-release-jmqvb" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Apr 29 22:05:41.473: INFO: Successfully updated pod "adopt-release-jmqvb" STEP: Checking that the Job releases the Pod Apr 29 22:05:41.473: INFO: Waiting up to 15m0s for pod "adopt-release-jmqvb" in namespace "job-5844" to be "released" Apr 29 22:05:41.476: INFO: Pod "adopt-release-jmqvb": Phase="Running", Reason="", readiness=true. Elapsed: 2.458957ms Apr 29 22:05:43.481: INFO: Pod "adopt-release-jmqvb": Phase="Running", Reason="", readiness=true. Elapsed: 2.00754865s Apr 29 22:05:43.481: INFO: Pod "adopt-release-jmqvb" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:43.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5844" for this suite. 
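------------------------------
Adoption and release above are purely a matter of pod labels versus the Job's selector: a running pod that matches is adopted (the Job writes itself in as controller ownerReference), and stripping the selector labels makes the controller release it again. A hypothetical sketch against the pod from this run; Jobs in this release select on controller-uid and job-name labels, so both would need to go:

  # Remove the Job's selector labels; a trailing '-' on a key deletes that label.
  kubectl label pod adopt-release-jmqvb controller-uid- job-name-
------------------------------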
• [SLOW TEST:15.075 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":19,"skipped":226,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:21.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-downwardapi-ncll STEP: Creating a pod to test atomic-volume-subpath Apr 29 22:05:21.932: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-ncll" in namespace "subpath-1224" to be "Succeeded or Failed" Apr 29 22:05:21.934: INFO: Pod "pod-subpath-test-downwardapi-ncll": Phase="Pending", Reason="", readiness=false. Elapsed: 1.971291ms Apr 29 22:05:23.967: INFO: Pod "pod-subpath-test-downwardapi-ncll": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034696998s Apr 29 22:05:25.970: INFO: Pod "pod-subpath-test-downwardapi-ncll": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037894329s Apr 29 22:05:27.974: INFO: Pod "pod-subpath-test-downwardapi-ncll": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041863221s Apr 29 22:05:29.978: INFO: Pod "pod-subpath-test-downwardapi-ncll": Phase="Running", Reason="", readiness=true. Elapsed: 8.045553209s Apr 29 22:05:31.982: INFO: Pod "pod-subpath-test-downwardapi-ncll": Phase="Running", Reason="", readiness=true. Elapsed: 10.049497051s Apr 29 22:05:34.010: INFO: Pod "pod-subpath-test-downwardapi-ncll": Phase="Running", Reason="", readiness=true. Elapsed: 12.077427801s Apr 29 22:05:36.015: INFO: Pod "pod-subpath-test-downwardapi-ncll": Phase="Running", Reason="", readiness=true. Elapsed: 14.082332421s Apr 29 22:05:38.019: INFO: Pod "pod-subpath-test-downwardapi-ncll": Phase="Running", Reason="", readiness=true. Elapsed: 16.086476298s Apr 29 22:05:40.023: INFO: Pod "pod-subpath-test-downwardapi-ncll": Phase="Running", Reason="", readiness=true. Elapsed: 18.090955231s Apr 29 22:05:42.028: INFO: Pod "pod-subpath-test-downwardapi-ncll": Phase="Running", Reason="", readiness=true. Elapsed: 20.095387396s Apr 29 22:05:44.047: INFO: Pod "pod-subpath-test-downwardapi-ncll": Phase="Running", Reason="", readiness=true. Elapsed: 22.114199656s Apr 29 22:05:46.050: INFO: Pod "pod-subpath-test-downwardapi-ncll": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.117312745s STEP: Saw pod success Apr 29 22:05:46.050: INFO: Pod "pod-subpath-test-downwardapi-ncll" satisfied condition "Succeeded or Failed" Apr 29 22:05:46.052: INFO: Trying to get logs from node node2 pod pod-subpath-test-downwardapi-ncll container test-container-subpath-downwardapi-ncll: STEP: delete the pod Apr 29 22:05:46.065: INFO: Waiting for pod pod-subpath-test-downwardapi-ncll to disappear Apr 29 22:05:46.067: INFO: Pod pod-subpath-test-downwardapi-ncll no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-ncll Apr 29 22:05:46.067: INFO: Deleting pod "pod-subpath-test-downwardapi-ncll" in namespace "subpath-1224" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:46.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1224" for this suite. • [SLOW TEST:24.178 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":128,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:43.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 29 22:05:44.160: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 29 22:05:46.171: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866744, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866744, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866744, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866744, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 29 22:05:49.184: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:49.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-53" for this suite. STEP: Destroying namespace "webhook-53-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.721 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":20,"skipped":230,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:12.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-3516 STEP: creating service affinity-clusterip-transition in namespace services-3516 STEP: creating replication controller affinity-clusterip-transition in namespace services-3516 I0429 22:05:12.301066 37 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-3516, replica count: 3 I0429 22:05:15.353217 37 runners.go:190] 
affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 22:05:18.354136 37 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 22:05:21.355402 37 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 22:05:24.357077 37 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 22:05:27.357321 37 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 22:05:30.360724 37 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 29 22:05:30.366: INFO: Creating new exec pod Apr 29 22:05:39.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3516 exec execpod-affinityzs7tj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 29 22:05:39.627: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" Apr 29 22:05:39.627: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 29 22:05:39.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3516 exec execpod-affinityzs7tj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.44.120 80' Apr 29 22:05:39.871: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.44.120 80\nConnection to 10.233.44.120 80 port [tcp/http] succeeded!\n" Apr 29 22:05:39.871: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 29 22:05:39.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3516 exec execpod-affinityzs7tj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.44.120:80/ ; done' Apr 29 22:05:40.190: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.233.44.120:80/\n" Apr 29 22:05:40.190: INFO: stdout: "\naffinity-clusterip-transition-dzl2q\naffinity-clusterip-transition-dzl2q\naffinity-clusterip-transition-dzl2q\naffinity-clusterip-transition-5dgb2\naffinity-clusterip-transition-5dgb2\naffinity-clusterip-transition-fwx8v\naffinity-clusterip-transition-fwx8v\naffinity-clusterip-transition-dzl2q\naffinity-clusterip-transition-5dgb2\naffinity-clusterip-transition-5dgb2\naffinity-clusterip-transition-5dgb2\naffinity-clusterip-transition-5dgb2\naffinity-clusterip-transition-fwx8v\naffinity-clusterip-transition-dzl2q\naffinity-clusterip-transition-5dgb2\naffinity-clusterip-transition-5dgb2" Apr 29 22:05:40.190: INFO: Received response from host: affinity-clusterip-transition-dzl2q Apr 29 22:05:40.190: INFO: Received response from host: affinity-clusterip-transition-dzl2q Apr 29 22:05:40.190: INFO: Received response from host: affinity-clusterip-transition-dzl2q Apr 29 22:05:40.190: INFO: Received response from host: affinity-clusterip-transition-5dgb2 Apr 29 22:05:40.190: INFO: Received response from host: affinity-clusterip-transition-5dgb2 Apr 29 22:05:40.190: INFO: Received response from host: affinity-clusterip-transition-fwx8v Apr 29 22:05:40.190: INFO: Received response from host: affinity-clusterip-transition-fwx8v Apr 29 22:05:40.190: INFO: Received response from host: affinity-clusterip-transition-dzl2q Apr 29 22:05:40.190: INFO: Received response from host: affinity-clusterip-transition-5dgb2 Apr 29 22:05:40.190: INFO: Received response from host: affinity-clusterip-transition-5dgb2 Apr 29 22:05:40.190: INFO: Received response from host: affinity-clusterip-transition-5dgb2 Apr 29 22:05:40.190: INFO: Received response from host: affinity-clusterip-transition-5dgb2 Apr 29 22:05:40.190: INFO: Received response from host: affinity-clusterip-transition-fwx8v Apr 29 22:05:40.190: INFO: Received response from host: affinity-clusterip-transition-dzl2q Apr 29 22:05:40.190: INFO: Received response from host: affinity-clusterip-transition-5dgb2 Apr 29 22:05:40.190: INFO: Received response from host: affinity-clusterip-transition-5dgb2 Apr 29 22:05:40.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3516 exec execpod-affinityzs7tj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.44.120:80/ ; done' Apr 29 22:05:40.517: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.120:80/\n" Apr 29 22:05:40.517: INFO: stdout: 
"\naffinity-clusterip-transition-fwx8v\naffinity-clusterip-transition-fwx8v\naffinity-clusterip-transition-fwx8v\naffinity-clusterip-transition-fwx8v\naffinity-clusterip-transition-fwx8v\naffinity-clusterip-transition-fwx8v\naffinity-clusterip-transition-fwx8v\naffinity-clusterip-transition-fwx8v\naffinity-clusterip-transition-fwx8v\naffinity-clusterip-transition-fwx8v\naffinity-clusterip-transition-fwx8v\naffinity-clusterip-transition-fwx8v\naffinity-clusterip-transition-fwx8v\naffinity-clusterip-transition-fwx8v\naffinity-clusterip-transition-fwx8v\naffinity-clusterip-transition-fwx8v" Apr 29 22:05:40.517: INFO: Received response from host: affinity-clusterip-transition-fwx8v Apr 29 22:05:40.517: INFO: Received response from host: affinity-clusterip-transition-fwx8v Apr 29 22:05:40.517: INFO: Received response from host: affinity-clusterip-transition-fwx8v Apr 29 22:05:40.517: INFO: Received response from host: affinity-clusterip-transition-fwx8v Apr 29 22:05:40.517: INFO: Received response from host: affinity-clusterip-transition-fwx8v Apr 29 22:05:40.517: INFO: Received response from host: affinity-clusterip-transition-fwx8v Apr 29 22:05:40.517: INFO: Received response from host: affinity-clusterip-transition-fwx8v Apr 29 22:05:40.517: INFO: Received response from host: affinity-clusterip-transition-fwx8v Apr 29 22:05:40.517: INFO: Received response from host: affinity-clusterip-transition-fwx8v Apr 29 22:05:40.517: INFO: Received response from host: affinity-clusterip-transition-fwx8v Apr 29 22:05:40.517: INFO: Received response from host: affinity-clusterip-transition-fwx8v Apr 29 22:05:40.517: INFO: Received response from host: affinity-clusterip-transition-fwx8v Apr 29 22:05:40.517: INFO: Received response from host: affinity-clusterip-transition-fwx8v Apr 29 22:05:40.517: INFO: Received response from host: affinity-clusterip-transition-fwx8v Apr 29 22:05:40.517: INFO: Received response from host: affinity-clusterip-transition-fwx8v Apr 29 22:05:40.517: INFO: Received response from host: affinity-clusterip-transition-fwx8v Apr 29 22:05:40.517: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-3516, will wait for the garbage collector to delete the pods Apr 29 22:05:40.582: INFO: Deleting ReplicationController affinity-clusterip-transition took: 4.089929ms Apr 29 22:05:40.683: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.613713ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:55.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3516" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:43.027 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:49.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption-release is created Apr 29 22:05:49.262: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:05:51.266: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:05:53.265: INFO: The status of Pod pod-adoption-release is Running (Ready = true) STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 29 22:05:54.279: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:55.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5668" for this suite. 
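The release step here mirrors the Job test earlier: changing the matched 'name' label takes the pod out of the ReplicaSet's selector, so the controller releases it and creates a replacement to restore the replica count. A minimal sketch (namespace and pod name come from this log; the replacement label value is hypothetical):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Overwrite the 'name' label the ReplicaSet selector matches on. The
	// controller then drops its ownerReference ("releases" the pod) and
	// spins up a new pod to get back to the desired replica count.
	patch := []byte(`{"metadata":{"labels":{"name":"not-pod-adoption-release"}}}`)
	if _, err := client.CoreV1().Pods("replicaset-5668").Patch(
		context.Background(), "pod-adoption-release",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("label changed; the ReplicaSet should release the pod")
}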
• [SLOW TEST:6.074 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":633,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:55.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 29 22:05:55.341: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5889cdb7-f4f9-4f34-b5c7-086691da59d5" in namespace "projected-9877" to be "Succeeded or Failed" Apr 29 22:05:55.345: INFO: Pod "downwardapi-volume-5889cdb7-f4f9-4f34-b5c7-086691da59d5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.807059ms Apr 29 22:05:57.348: INFO: Pod "downwardapi-volume-5889cdb7-f4f9-4f34-b5c7-086691da59d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00680632s Apr 29 22:05:59.353: INFO: Pod "downwardapi-volume-5889cdb7-f4f9-4f34-b5c7-086691da59d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011263544s STEP: Saw pod success Apr 29 22:05:59.353: INFO: Pod "downwardapi-volume-5889cdb7-f4f9-4f34-b5c7-086691da59d5" satisfied condition "Succeeded or Failed" Apr 29 22:05:59.358: INFO: Trying to get logs from node node2 pod downwardapi-volume-5889cdb7-f4f9-4f34-b5c7-086691da59d5 container client-container: STEP: delete the pod Apr 29 22:05:59.370: INFO: Waiting for pod downwardapi-volume-5889cdb7-f4f9-4f34-b5c7-086691da59d5 to disappear Apr 29 22:05:59.371: INFO: Pod downwardapi-volume-5889cdb7-f4f9-4f34-b5c7-086691da59d5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:59.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9877" for this suite. 
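The "set mode on item file" case exercises the per-item Mode field of a projected downward API volume: the kubelet writes the pod-metadata file with exactly that permission, and the test container stats it. A sketch of the pod shape involved (pod name, image, and mount path are illustrative assumptions, not the test's actual fixture):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	mode := int32(0400) // the per-item file mode the test asserts on
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "podname",
									FieldRef: &corev1.ObjectFieldSelector{
										APIVersion: "v1",
										FieldPath:  "metadata.name",
									},
									Mode: &mode, // without this, DefaultMode (0644) applies
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "busybox",
				// Prints the octal mode of the projected file, e.g. "400".
				Command:      []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}

	if _, err := client.CoreV1().Pods("default").Create(
		context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}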
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":634,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:46.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:05:46.141: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 29 22:05:54.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6232 --namespace=crd-publish-openapi-6232 create -f -' Apr 29 22:05:54.729: INFO: stderr: "" Apr 29 22:05:54.729: INFO: stdout: "e2e-test-crd-publish-openapi-247-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 29 22:05:54.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6232 --namespace=crd-publish-openapi-6232 delete e2e-test-crd-publish-openapi-247-crds test-cr' Apr 29 22:05:54.906: INFO: stderr: "" Apr 29 22:05:54.907: INFO: stdout: "e2e-test-crd-publish-openapi-247-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Apr 29 22:05:54.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6232 --namespace=crd-publish-openapi-6232 apply -f -' Apr 29 22:05:55.269: INFO: stderr: "" Apr 29 22:05:55.269: INFO: stdout: "e2e-test-crd-publish-openapi-247-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 29 22:05:55.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6232 --namespace=crd-publish-openapi-6232 delete e2e-test-crd-publish-openapi-247-crds test-cr' Apr 29 22:05:55.437: INFO: stderr: "" Apr 29 22:05:55.437: INFO: stdout: "e2e-test-crd-publish-openapi-247-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 29 22:05:55.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6232 explain e2e-test-crd-publish-openapi-247-crds' Apr 29 22:05:55.779: INFO: stderr: "" Apr 29 22:05:55.779: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-247-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:05:59.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6232" for this suite. 
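What makes this CRD special is x-kubernetes-preserve-unknown-fields: true at the schema root: pruning is disabled, so the kubectl create/apply calls above succeed with arbitrary properties, and kubectl explain has no per-field schema to show, hence the empty DESCRIPTION. A sketch of such a CRD built with the apiextensions v1 Go types (group and names here are hypothetical; the real test generates randomized ones like crd-publish-openapi-test-unknown-at-root.example.com):

package main

import (
	"context"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := apiextclient.NewForConfigOrDie(cfg)

	preserve := true
	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "testcrs.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "testcrs", Singular: "testcr", Kind: "TestCr", ListKind: "TestCrList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextv1.CustomResourceValidation{
					// Preserving unknown fields at the schema root disables
					// pruning, so client-side validation accepts any properties.
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: &preserve,
					},
				},
			}},
		},
	}

	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(
		context.Background(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}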
• [SLOW TEST:13.277 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":9,"skipped":149,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:35.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-4285 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 29 22:05:35.654: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 29 22:05:35.685: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:05:37.688: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:05:39.689: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:05:41.689: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:05:43.689: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:05:45.688: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:05:47.689: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:05:49.690: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:05:51.688: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:05:53.688: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:05:55.688: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 29 22:05:55.693: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 29 22:05:57.695: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 29 22:06:01.730: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Apr 29 22:06:01.730: INFO: Going to poll 10.244.3.209 on port 8080 at least 0 times, with a maximum of 34 tries before failing Apr 29 22:06:01.732: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.3.209:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4285 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 22:06:01.732: INFO: >>> kubeConfig: /root/.kube/config Apr 29 22:06:01.827: INFO: Found all 1 expected endpoints: [netserver-0] Apr 
29 22:06:01.827: INFO: Going to poll 10.244.4.187 on port 8080 at least 0 times, with a maximum of 34 tries before failing Apr 29 22:06:01.830: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.4.187:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4285 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 22:06:01.830: INFO: >>> kubeConfig: /root/.kube/config Apr 29 22:06:01.949: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:01.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4285" for this suite. • [SLOW TEST:26.323 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":568,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:59.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-4aff6d4b-3806-4095-86e4-afb2cfbe63f7 STEP: Creating a pod to test consume configMaps Apr 29 22:05:59.424: INFO: Waiting up to 5m0s for pod "pod-configmaps-53f2d20e-abc4-4745-9c22-d83b56847963" in namespace "configmap-6163" to be "Succeeded or Failed" Apr 29 22:05:59.426: INFO: Pod "pod-configmaps-53f2d20e-abc4-4745-9c22-d83b56847963": Phase="Pending", Reason="", readiness=false. Elapsed: 1.911589ms Apr 29 22:06:01.429: INFO: Pod "pod-configmaps-53f2d20e-abc4-4745-9c22-d83b56847963": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004803612s Apr 29 22:06:03.432: INFO: Pod "pod-configmaps-53f2d20e-abc4-4745-9c22-d83b56847963": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007834169s STEP: Saw pod success Apr 29 22:06:03.432: INFO: Pod "pod-configmaps-53f2d20e-abc4-4745-9c22-d83b56847963" satisfied condition "Succeeded or Failed" Apr 29 22:06:03.435: INFO: Trying to get logs from node node2 pod pod-configmaps-53f2d20e-abc4-4745-9c22-d83b56847963 container agnhost-container: STEP: delete the pod Apr 29 22:06:03.451: INFO: Waiting for pod pod-configmaps-53f2d20e-abc4-4745-9c22-d83b56847963 to disappear Apr 29 22:06:03.453: INFO: Pod pod-configmaps-53f2d20e-abc4-4745-9c22-d83b56847963 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:03.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6163" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":636,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:59.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Apr 29 22:05:59.461: INFO: Waiting up to 5m0s for pod "security-context-843d97ee-3549-4ede-bb9c-418e9e191f12" in namespace "security-context-9765" to be "Succeeded or Failed" Apr 29 22:05:59.464: INFO: Pod "security-context-843d97ee-3549-4ede-bb9c-418e9e191f12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.703695ms Apr 29 22:06:01.468: INFO: Pod "security-context-843d97ee-3549-4ede-bb9c-418e9e191f12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00646181s Apr 29 22:06:03.471: INFO: Pod "security-context-843d97ee-3549-4ede-bb9c-418e9e191f12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009675729s STEP: Saw pod success Apr 29 22:06:03.471: INFO: Pod "security-context-843d97ee-3549-4ede-bb9c-418e9e191f12" satisfied condition "Succeeded or Failed" Apr 29 22:06:03.473: INFO: Trying to get logs from node node1 pod security-context-843d97ee-3549-4ede-bb9c-418e9e191f12 container test-container: STEP: delete the pod Apr 29 22:06:03.485: INFO: Waiting for pod security-context-843d97ee-3549-4ede-bb9c-418e9e191f12 to disappear Apr 29 22:06:03.487: INFO: Pod security-context-843d97ee-3549-4ede-bb9c-418e9e191f12 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:03.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-9765" for this suite. 
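This test sets RunAsUser and RunAsGroup on the container's SecurityContext and then checks which IDs the process actually runs with. A minimal sketch of an equivalent pod (UID/GID values, image, and names are illustrative assumptions):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	uid, gid := int64(1001), int64(2002)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Should print "1001 2002" if the kubelet applied the IDs.
				Command: []string{"sh", "-c", "echo $(id -u) $(id -g)"},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser:  &uid, // container-level settings override pod-level ones
					RunAsGroup: &gid,
				},
			}},
		},
	}

	if _, err := client.CoreV1().Pods("default").Create(
		context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}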
•S ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":171,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:01.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:06:02.007: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 29 22:06:07.013: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 29 22:06:07.014: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Apr 29 22:06:07.033: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-1215 ab9cd409-ab8c-42aa-add0-43dcf8be7963 46394 1 2022-04-29 22:06:07 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2022-04-29 22:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0048056d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Apr 29 22:06:07.040: INFO: New ReplicaSet "test-cleanup-deployment-5b4d99b59b" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5b4d99b59b deployment-1215 71a0e59a-0aa8-46a3-81e8-888ce7fc373b 46396 1 2022-04-29 22:06:07 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment ab9cd409-ab8c-42aa-add0-43dcf8be7963 0xc004805b27 0xc004805b28}] [] [{kube-controller-manager Update apps/v1 2022-04-29 22:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab9cd409-ab8c-42aa-add0-43dcf8be7963\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5b4d99b59b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004805bb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 29 22:06:07.040: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 29 22:06:07.040: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-1215 bdaaf70b-30d4-476b-af21-5f30e1683be4 46395 1 2022-04-29 22:06:02 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment ab9cd409-ab8c-42aa-add0-43dcf8be7963 0xc004805a17 0xc004805a18}] [] [{e2e.test Update apps/v1
2022-04-29 22:06:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-04-29 22:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"ab9cd409-ab8c-42aa-add0-43dcf8be7963\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004805ab8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 29 22:06:07.048: INFO: Pod "test-cleanup-controller-kj6x8" is available: &Pod{ObjectMeta:{test-cleanup-controller-kj6x8 test-cleanup-controller- deployment-1215 4059f35c-fa73-4160-bda0-9a5da3e2d6ef 46352 0 2022-04-29 22:06:02 +0000 UTC map[name:cleanup-pod pod:httpd] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.193" ], "mac": "46:35:cd:8e:84:da", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.193" ], "mac": "46:35:cd:8e:84:da", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-cleanup-controller bdaaf70b-30d4-476b-af21-5f30e1683be4 0xc004a96167 0xc004a96168}] [] [{kube-controller-manager Update v1 2022-04-29 22:06:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bdaaf70b-30d4-476b-af21-5f30e1683be4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-04-29 22:06:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-04-29 22:06:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.193\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mrq8v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mrq8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks
:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:06:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:06:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:06:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:06:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.193,StartTime:2022-04-29 22:06:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 22:06:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://de28b5e949e4a69535eb82e3dd809d259c4b4d683172d679feecd1ebaecf182e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.193,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 29 22:06:07.048: INFO: Pod "test-cleanup-deployment-5b4d99b59b-zhzd5" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-5b4d99b59b-zhzd5 test-cleanup-deployment-5b4d99b59b- deployment-1215 6bf01b90-32e2-47b3-bb2b-e033127a3856 46399 0 2022-04-29 22:06:07 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-cleanup-deployment-5b4d99b59b 71a0e59a-0aa8-46a3-81e8-888ce7fc373b 0xc004a9635f 0xc004a96370}] [] [{kube-controller-manager Update v1 2022-04-29 22:06:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71a0e59a-0aa8-46a3-81e8-888ce7fc373b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qznrn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qznrn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exi
sts,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:07.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1215" for this suite. • [SLOW TEST:5.077 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":42,"skipped":580,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:07.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 29 22:06:07.443: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 29 22:06:09.452: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866767, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866767, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866767, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866767, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: 
Verifying the service has paired with the endpoint Apr 29 22:06:12.464: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:22.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-857" for this suite. STEP: Destroying namespace "webhook-857-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.538 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":43,"skipped":582,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:22.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 29 22:06:22.636: INFO: Waiting up to 5m0s for pod "pod-c306255e-fad9-4b6d-ab30-e287615d6718" in namespace "emptydir-9511" to be "Succeeded or Failed" Apr 29 22:06:22.639: INFO: Pod "pod-c306255e-fad9-4b6d-ab30-e287615d6718": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15814ms Apr 29 22:06:24.643: INFO: Pod "pod-c306255e-fad9-4b6d-ab30-e287615d6718": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006582182s Apr 29 22:06:26.649: INFO: Pod "pod-c306255e-fad9-4b6d-ab30-e287615d6718": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.012567158s STEP: Saw pod success Apr 29 22:06:26.649: INFO: Pod "pod-c306255e-fad9-4b6d-ab30-e287615d6718" satisfied condition "Succeeded or Failed" Apr 29 22:06:26.651: INFO: Trying to get logs from node node2 pod pod-c306255e-fad9-4b6d-ab30-e287615d6718 container test-container: STEP: delete the pod Apr 29 22:06:26.666: INFO: Waiting for pod pod-c306255e-fad9-4b6d-ab30-e287615d6718 to disappear Apr 29 22:06:26.667: INFO: Pod pod-c306255e-fad9-4b6d-ab30-e287615d6718 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:26.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9511" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":583,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:26.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:06:26.734: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Apr 29 22:06:26.752: INFO: The status of Pod pod-exec-websocket-81ed890b-a736-456e-a334-fcd8cf859f8e is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:06:28.755: INFO: The status of Pod pod-exec-websocket-81ed890b-a736-456e-a334-fcd8cf859f8e is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:06:30.756: INFO: The status of Pod pod-exec-websocket-81ed890b-a736-456e-a334-fcd8cf859f8e is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:30.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9622" for this suite. 
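The pod-exec-websocket test above drives remote command execution through the pod's exec subresource; the e2e case dials that subresource over a raw websocket, which client-go does not expose directly. A rough sketch of hitting the same subresource with client-go's SPDY executor instead (pod name, container name, and command are hypothetical; error handling is minimal):

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Build a request against the pod's exec subresource.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("default").Name("my-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "main",
			Command:   []string{"echo", "remote hello"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	// The e2e test streams over a websocket; SPDY exercises the same endpoint.
	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	// Stream runs the remote command and copies its output back over the connection.
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String())
}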
• ------------------------------ {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":600,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:01:31.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0429 22:01:31.249318 31 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a suspended cronjob STEP: Ensuring no jobs are scheduled STEP: Ensuring no job exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:31.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-226" for this suite. • [SLOW TEST:300.042 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":14,"skipped":280,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:31.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:31.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2496" for this suite. 
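The CronJob case above checks that spec.suspend = true keeps the controller from creating any Jobs for the whole five-minute observation window. A minimal sketch of creating such a suspended CronJob with client-go, written against the batch/v1 API that the deprecation warning above recommends (names, schedule, and the busybox image are placeholders):

package main

import (
	"context"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	suspend := true
	cj := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "suspended-cronjob"},
		Spec: batchv1.CronJobSpec{
			Schedule: "*/1 * * * *",
			Suspend:  &suspend, // controller tracks schedule times but creates no Jobs
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "noop",
								Image:   "busybox",
								Command: []string{"true"},
							}},
						},
					},
				},
			},
		},
	}
	created, err := cs.BatchV1().CronJobs("default").Create(context.TODO(), cj, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name, "suspended:", *created.Spec.Suspend)
}

Flipping Suspend back to false lets the controller resume creating Jobs at the next schedule boundary.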
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":15,"skipped":289,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:03.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-821 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 29 22:06:03.556: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 29 22:06:03.702: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:06:05.706: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:06:07.707: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:06:09.706: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:06:11.706: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:06:13.707: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:06:15.705: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:06:17.706: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:06:19.708: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:06:21.706: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 22:06:23.707: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 29 22:06:23.711: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 29 22:06:25.715: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 29 22:06:29.752: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Apr 29 22:06:29.752: INFO: Going to poll 10.244.3.214 on port 8081 at least 0 times, with a maximum of 34 tries before failing Apr 29 22:06:29.755: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.3.214 8081 | grep -v '^\s*$'] Namespace:pod-network-test-821 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 22:06:29.755: INFO: >>> kubeConfig: /root/.kube/config Apr 29 22:06:30.835: INFO: Found all 1 expected endpoints: [netserver-0] Apr 29 22:06:30.835: INFO: Going to poll 10.244.4.195 on port 8081 at least 0 times, with a maximum of 34 tries before failing Apr 29 22:06:30.837: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.4.195 8081 | grep -v '^\s*$'] Namespace:pod-network-test-821 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 22:06:30.837: INFO: >>> kubeConfig: 
/root/.kube/config Apr 29 22:06:31.919: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:31.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-821" for this suite. • [SLOW TEST:28.389 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":677,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:32.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 29 22:06:32.055: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6384 e9e970b4-26a1-4537-bc41-5cc38910ff59 46800 0 2022-04-29 22:06:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-29 22:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 29 22:06:32.056: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6384 e9e970b4-26a1-4537-bc41-5cc38910ff59 46801 0 2022-04-29 22:06:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-29 22:06:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 29 22:06:32.065: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6384 e9e970b4-26a1-4537-bc41-5cc38910ff59 46802 0 2022-04-29 22:06:32 +0000 UTC 
map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-29 22:06:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 29 22:06:32.066: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6384 e9e970b4-26a1-4537-bc41-5cc38910ff59 46803 0 2022-04-29 22:06:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-29 22:06:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:32.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6384" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":34,"skipped":723,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:32.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap that has name configmap-test-emptyKey-d21a4e24-673f-4864-a67d-5612892689c8 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:32.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-356" for this suite. 
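The empty-key ConfigMap above is rejected at API-server validation time: data keys must be non-empty and, per the key validation rules, consist of alphanumerics, '-', '_', or '.'. A minimal client-go sketch of the same negative check (namespace and object name are placeholders):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
		Data:       map[string]string{"": "value"}, // empty key: fails API validation
	}
	_, err = cs.CoreV1().ConfigMaps("default").Create(context.TODO(), cm, metav1.CreateOptions{})
	fmt.Println(err) // expect an Invalid error for data[""]
}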
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":35,"skipped":734,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:31.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Apr 29 22:06:31.359: INFO: Waiting up to 5m0s for pod "downward-api-bf7d7e77-3517-4303-b4cc-22e17ce53da8" in namespace "downward-api-9154" to be "Succeeded or Failed" Apr 29 22:06:31.361: INFO: Pod "downward-api-bf7d7e77-3517-4303-b4cc-22e17ce53da8": Phase="Pending", Reason="", readiness=false. Elapsed: 1.789653ms Apr 29 22:06:33.366: INFO: Pod "downward-api-bf7d7e77-3517-4303-b4cc-22e17ce53da8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007178986s Apr 29 22:06:35.372: INFO: Pod "downward-api-bf7d7e77-3517-4303-b4cc-22e17ce53da8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012932668s STEP: Saw pod success Apr 29 22:06:35.372: INFO: Pod "downward-api-bf7d7e77-3517-4303-b4cc-22e17ce53da8" satisfied condition "Succeeded or Failed" Apr 29 22:06:35.375: INFO: Trying to get logs from node node1 pod downward-api-bf7d7e77-3517-4303-b4cc-22e17ce53da8 container dapi-container: STEP: delete the pod Apr 29 22:06:35.390: INFO: Waiting for pod downward-api-bf7d7e77-3517-4303-b4cc-22e17ce53da8 to disappear Apr 29 22:06:35.392: INFO: Pod downward-api-bf7d7e77-3517-4303-b4cc-22e17ce53da8 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:35.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9154" for this suite. 
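The Downward API case above exposes the container's own limits.cpu/memory and requests.cpu/memory back to it as environment variables via resourceFieldRef. A minimal sketch of such a pod (image, names, and resource values are placeholders):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_REQUEST=$MEMORY_REQUEST"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("250m"),
					},
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				Env: []corev1.EnvVar{
					{
						// Resolved by the kubelet from the container's own spec.
						Name: "CPU_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
						},
					},
					{
						Name: "MEMORY_REQUEST",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"},
						},
					},
				},
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name)
}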
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":296,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:32.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:06:32.216: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"74322cfb-0c9a-47c2-940b-d74c6bcd3d8e", Controller:(*bool)(0xc0008257d2), BlockOwnerDeletion:(*bool)(0xc0008257d3)}} Apr 29 22:06:32.221: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6a432a5c-69c7-4b02-b216-2fa4dc7482e9", Controller:(*bool)(0xc0003e028a), BlockOwnerDeletion:(*bool)(0xc0003e028b)}} Apr 29 22:06:32.225: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"770df249-3b0c-4971-8805-be8f4324e40e", Controller:(*bool)(0xc0003e094a), BlockOwnerDeletion:(*bool)(0xc0003e094b)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:37.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-549" for this suite. 
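The garbage-collector case above deliberately wires pod1, pod2, and pod3 into an ownership circle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) and then verifies that collection is not blocked by the cycle. A rough sketch of building that circle with client-go, not the test's own code (the pause image and pod names are placeholders; error handling is condensed into a helper):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// ownerRef builds a v1 Pod OwnerReference pointing at p.
func ownerRef(p *corev1.Pod) metav1.OwnerReference {
	return metav1.OwnerReference{APIVersion: "v1", Kind: "Pod", Name: p.Name, UID: p.UID}
}

func newPod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
		},
	}
}

func must(p *corev1.Pod, err error) *corev1.Pod {
	if err != nil {
		panic(err)
	}
	return p
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods := cs.CoreV1().Pods("default")
	ctx := context.TODO()

	pod1 := must(pods.Create(ctx, newPod("pod1"), metav1.CreateOptions{}))
	pod2 := newPod("pod2")
	pod2.OwnerReferences = []metav1.OwnerReference{ownerRef(pod1)} // pod2 owned by pod1
	pod2 = must(pods.Create(ctx, pod2, metav1.CreateOptions{}))
	pod3 := newPod("pod3")
	pod3.OwnerReferences = []metav1.OwnerReference{ownerRef(pod2)} // pod3 owned by pod2
	pod3 = must(pods.Create(ctx, pod3, metav1.CreateOptions{}))

	// Close the circle: make pod3 the owner of pod1.
	pod1.OwnerReferences = []metav1.OwnerReference{ownerRef(pod3)}
	updated, err := pods.Update(ctx, pod1, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("circle closed:", updated.Name, "owned by", updated.OwnerReferences[0].Name)
}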
• [SLOW TEST:5.086 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":36,"skipped":750,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:35.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:06:35.483: INFO: The status of Pod busybox-host-aliases470988a3-0c1c-434a-ad93-3e96e5da324c is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:06:37.485: INFO: The status of Pod busybox-host-aliases470988a3-0c1c-434a-ad93-3e96e5da324c is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:06:39.488: INFO: The status of Pod busybox-host-aliases470988a3-0c1c-434a-ad93-3e96e5da324c is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:39.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-916" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":321,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:39.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:06:39.546: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-6a4a7b6e-aa3e-4d43-b4ee-8555341e7e62" in namespace "security-context-test-5293" to be "Succeeded or Failed" Apr 29 22:06:39.548: INFO: Pod "busybox-privileged-false-6a4a7b6e-aa3e-4d43-b4ee-8555341e7e62": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.510037ms Apr 29 22:06:41.551: INFO: Pod "busybox-privileged-false-6a4a7b6e-aa3e-4d43-b4ee-8555341e7e62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005643124s Apr 29 22:06:43.558: INFO: Pod "busybox-privileged-false-6a4a7b6e-aa3e-4d43-b4ee-8555341e7e62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012328531s Apr 29 22:06:43.558: INFO: Pod "busybox-privileged-false-6a4a7b6e-aa3e-4d43-b4ee-8555341e7e62" satisfied condition "Succeeded or Failed" Apr 29 22:06:43.564: INFO: Got logs for pod "busybox-privileged-false-6a4a7b6e-aa3e-4d43-b4ee-8555341e7e62": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:43.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5293" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":324,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:43.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:43.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-7837" for this suite. 
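The Tables case above negotiates the server-side Table rendering through the Accept header and expects 406 Not Acceptable from a backend that cannot produce it. A minimal sketch of issuing such a request with client-go's REST client (resource and namespace are placeholders; core resources like pods do implement Table conversion and would answer 200):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ask the server to render the list as a meta.k8s.io/v1 Table.
	body, err := cs.CoreV1().RESTClient().Get().
		Resource("pods").Namespace("default").
		SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
		DoRaw(context.TODO())
	if err != nil {
		// A backend without Table support surfaces a 406 here.
		fmt.Println("request failed:", err)
		return
	}
	fmt.Println(string(body))
}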
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":19,"skipped":342,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:11.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-a6dab0d4-ebf9-4167-9f4c-ac88c3378bca STEP: Creating configMap with name cm-test-opt-upd-6f53b327-c4c0-4bc9-81de-a5008e26470c STEP: Creating the pod Apr 29 22:05:11.274: INFO: The status of Pod pod-configmaps-3c1a2655-b766-4e1e-92c7-fee07b2e58af is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:05:13.280: INFO: The status of Pod pod-configmaps-3c1a2655-b766-4e1e-92c7-fee07b2e58af is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:05:15.278: INFO: The status of Pod pod-configmaps-3c1a2655-b766-4e1e-92c7-fee07b2e58af is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:05:17.279: INFO: The status of Pod pod-configmaps-3c1a2655-b766-4e1e-92c7-fee07b2e58af is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:05:19.283: INFO: The status of Pod pod-configmaps-3c1a2655-b766-4e1e-92c7-fee07b2e58af is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-a6dab0d4-ebf9-4167-9f4c-ac88c3378bca STEP: Updating configmap cm-test-opt-upd-6f53b327-c4c0-4bc9-81de-a5008e26470c STEP: Creating configMap with name cm-test-opt-create-a92e2b80-b750-4367-8438-c176575a4396 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:44.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-754" for this suite. 
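The ConfigMap-volume case above survives one referenced ConfigMap being deleted and another created mid-test because the volume sources are marked optional and the kubelet keeps re-syncing the mounted view. A minimal sketch of that volume wiring (pod, image, and ConfigMap names are placeholders):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	optional := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-optional"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "watcher",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/cm/* 2>/dev/null; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/cm"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
						// Optional: the pod starts (and stays up) even if the
						// ConfigMap is missing or later deleted.
						Optional: &optional,
					},
				},
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name)
}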
• [SLOW TEST:92.947 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":596,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:30.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 29 22:06:31.370: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 29 22:06:33.381: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866791, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866791, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866791, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866791, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 29 22:06:36.395: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:06:36.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4953-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:44.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7859" for this suite. STEP: Destroying namespace "webhook-7859-markers" for this suite.
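Registering the mutating webhook in the run above goes through the admissionregistration.k8s.io/v1 API. Conceptually the configuration looks like the sketch below; the service reference, path, CA bundle, and the custom-resource group and resource names are placeholders for whatever the test generated:

package main

import (
	"context"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	path := "/mutating-custom-resource"
	sideEffects := admissionregistrationv1.SideEffectClassNone
	caBundle := []byte("-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----") // placeholder
	hook := &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "mutate-custom-resource.example.com"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name:                    "mutate-custom-resource.example.com",
			AdmissionReviewVersions: []string{"v1"},
			SideEffects:             &sideEffects,
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "default",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: caBundle,
			},
			// Intercept creations of the custom resource so the webhook can patch them.
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"webhook.example.com"},
					APIVersions: []string{"v1"},
					Resources:   []string{"e2e-test-webhook-crds"},
				},
			}},
		}},
	}
	created, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
		Create(context.TODO(), hook, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("registered", created.Name)
}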
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.574 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":46,"skipped":622,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:03.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-2986 Apr 29 22:04:03.213: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:04:05.217: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:04:07.217: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:04:09.217: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:04:11.217: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Apr 29 22:04:11.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Apr 29 22:04:11.478: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Apr 29 22:04:11.478: INFO: stdout: "iptables" Apr 29 22:04:11.478: INFO: proxyMode: iptables Apr 29 22:04:11.486: INFO: Waiting for pod kube-proxy-mode-detector to disappear Apr 29 22:04:11.488: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-2986 STEP: creating replication controller affinity-nodeport-timeout in namespace services-2986 I0429 22:04:11.504443 32 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-2986, replica count: 3 I0429 22:04:14.556126 32 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 22:04:17.556597 32 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 22:04:20.557023 32 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 
created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 29 22:04:20.564: INFO: Creating new exec pod Apr 29 22:04:29.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 29 22:04:30.141: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 29 22:04:30.141: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 29 22:04:30.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.24.30 80' Apr 29 22:04:30.750: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.24.30 80\nConnection to 10.233.24.30 80 port [tcp/http] succeeded!\n" Apr 29 22:04:30.750: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 29 22:04:30.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:04:31.372: INFO: rc: 1 Apr 29 22:04:31.372: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:04:32.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:04:32.620: INFO: rc: 1 Apr 29 22:04:32.620: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:04:33.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:04:33.636: INFO: rc: 1 Apr 29 22:04:33.636: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
[Apr 29 22:04:34 to 22:04:53: 20 further identical attempts, one per second, each failing with: nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused]
Apr 29 22:04:54.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:04:54.712: INFO: rc: 1 Apr 29 22:04:54.712: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:04:55.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:04:56.386: INFO: rc: 1 Apr 29 22:04:56.386: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:04:57.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:04:57.628: INFO: rc: 1 Apr 29 22:04:57.628: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:04:58.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:04:58.698: INFO: rc: 1 Apr 29 22:04:58.698: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:04:59.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:04:59.649: INFO: rc: 1 Apr 29 22:04:59.649: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:00.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:00.732: INFO: rc: 1 Apr 29 22:05:00.732: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:01.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:01.688: INFO: rc: 1 Apr 29 22:05:01.688: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:02.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:02.715: INFO: rc: 1 Apr 29 22:05:02.715: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30972 + echo hostName nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:05:03.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:03.618: INFO: rc: 1 Apr 29 22:05:03.618: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:04.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:04.646: INFO: rc: 1 Apr 29 22:05:04.646: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:05.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:05.597: INFO: rc: 1 Apr 29 22:05:05.597: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo+ hostNamenc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:06.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:06.764: INFO: rc: 1 Apr 29 22:05:06.764: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:05:07.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:07.653: INFO: rc: 1 Apr 29 22:05:07.653: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:08.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:08.874: INFO: rc: 1 Apr 29 22:05:08.874: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:09.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:09.633: INFO: rc: 1 Apr 29 22:05:09.633: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:10.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:10.600: INFO: rc: 1 Apr 29 22:05:10.600: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30972 + echo hostName nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:05:11.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:11.705: INFO: rc: 1 Apr 29 22:05:11.705: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:12.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:12.831: INFO: rc: 1 Apr 29 22:05:12.831: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:13.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:13.665: INFO: rc: 1 Apr 29 22:05:13.665: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:14.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:14.742: INFO: rc: 1 Apr 29 22:05:14.742: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:05:15.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:16.582: INFO: rc: 1 Apr 29 22:05:16.582: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:17.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:18.128: INFO: rc: 1 Apr 29 22:05:18.128: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:18.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:18.659: INFO: rc: 1 Apr 29 22:05:18.659: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:19.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:19.627: INFO: rc: 1 Apr 29 22:05:19.628: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:05:20.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:20.726: INFO: rc: 1 Apr 29 22:05:20.726: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:21.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:21.586: INFO: rc: 1 Apr 29 22:05:21.586: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:22.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:22.645: INFO: rc: 1 Apr 29 22:05:22.645: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:23.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:23.870: INFO: rc: 1 Apr 29 22:05:23.870: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:05:24.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:24.614: INFO: rc: 1 Apr 29 22:05:24.614: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:25.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:26.220: INFO: rc: 1 Apr 29 22:05:26.220: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:26.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:26.758: INFO: rc: 1 Apr 29 22:05:26.758: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:27.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:27.596: INFO: rc: 1 Apr 29 22:05:27.596: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:05:28.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:28.719: INFO: rc: 1 Apr 29 22:05:28.719: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:29.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:29.815: INFO: rc: 1 Apr 29 22:05:29.815: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:30.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:30.623: INFO: rc: 1 Apr 29 22:05:30.623: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:31.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:31.738: INFO: rc: 1 Apr 29 22:05:31.738: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:05:32.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:32.667: INFO: rc: 1 Apr 29 22:05:32.667: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:33.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:33.625: INFO: rc: 1 Apr 29 22:05:33.625: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:34.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:34.617: INFO: rc: 1 Apr 29 22:05:34.617: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:35.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:35.625: INFO: rc: 1 Apr 29 22:05:35.625: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:05:36.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:36.608: INFO: rc: 1 Apr 29 22:05:36.608: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:37.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:37.651: INFO: rc: 1 Apr 29 22:05:37.651: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:38.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:38.630: INFO: rc: 1 Apr 29 22:05:38.630: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:39.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:39.626: INFO: rc: 1 Apr 29 22:05:39.626: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:05:40.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:40.629: INFO: rc: 1 Apr 29 22:05:40.629: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:41.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:41.625: INFO: rc: 1 Apr 29 22:05:41.625: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:42.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:42.903: INFO: rc: 1 Apr 29 22:05:42.903: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30972 + echo hostName nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:43.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:43.755: INFO: rc: 1 Apr 29 22:05:43.755: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:05:44.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:44.637: INFO: rc: 1 Apr 29 22:05:44.637: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:45.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:45.662: INFO: rc: 1 Apr 29 22:05:45.662: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:46.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:46.618: INFO: rc: 1 Apr 29 22:05:46.618: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:47.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:47.611: INFO: rc: 1 Apr 29 22:05:47.611: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:05:48.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:48.616: INFO: rc: 1 Apr 29 22:05:48.616: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:49.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:49.625: INFO: rc: 1 Apr 29 22:05:49.626: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo+ hostName nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:50.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:50.825: INFO: rc: 1 Apr 29 22:05:50.825: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:51.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:51.601: INFO: rc: 1 Apr 29 22:05:51.601: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:05:52.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:52.611: INFO: rc: 1 Apr 29 22:05:52.611: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:53.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:53.595: INFO: rc: 1 Apr 29 22:05:53.595: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:54.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:54.619: INFO: rc: 1 Apr 29 22:05:54.619: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:55.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:55.819: INFO: rc: 1 Apr 29 22:05:55.820: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:05:56.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:56.989: INFO: rc: 1 Apr 29 22:05:56.989: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:57.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:57.720: INFO: rc: 1 Apr 29 22:05:57.720: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30972 + echo hostName nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:58.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:58.622: INFO: rc: 1 Apr 29 22:05:58.622: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:05:59.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:05:59.626: INFO: rc: 1 Apr 29 22:05:59.626: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:06:00.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:06:00.999: INFO: rc: 1 Apr 29 22:06:01.000: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:01.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:06:01.680: INFO: rc: 1 Apr 29 22:06:01.680: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:02.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:06:02.707: INFO: rc: 1 Apr 29 22:06:02.707: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:03.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972' Apr 29 22:06:03.918: INFO: rc: 1 Apr 29 22:06:03.918: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30972 nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:06:04.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972'
Apr 29 22:06:04.753: INFO: rc: 1
Apr 29 22:06:04.753: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30972
nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[identical retry attempts, roughly one per second from 22:06:05.373 through 22:06:31.373, each failing with the same "Connection refused" error, elided]
Apr 29 22:06:31.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972'
Apr 29 22:06:31.859: INFO: rc: 1
Apr 29 22:06:31.859: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2986 exec execpod-affinity56qnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30972:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30972
nc: connect to 10.10.190.207 port 30972 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
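The loop above is the e2e framework polling the NodePort endpoint about once per second (nc with a 2-second connect timeout, run inside the exec pod) until the service answers or the overall 2m0s reachability timeout expires; the execAffinityTestForSessionAffinityTimeout frame in the stack trace that follows shows this check running as part of a session-affinity-timeout test against a NodePort service. Below is a minimal standalone sketch of the same poll-until-deadline check in Go. It is not the framework's own helper: it dials directly from the local machine rather than exec'ing into a pod, and only the endpoint and the timings are taken from the log.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint from the log: node1's InternalIP and the service's NodePort.
	const endpoint = "10.10.190.207:30972"
	// Mirror the behaviour seen above: 2s dial timeout, ~1s between
	// attempts, give up after 2m0s overall.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("service is reachable")
			return
		}
		fmt.Printf("dial failed (%v), retrying...\n", err)
		time.Sleep(time.Second)
	}
	fmt.Printf("service is not reachable within 2m0s timeout on endpoint %s over TCP protocol\n", endpoint)
}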
Apr 29 22:06:31.860: FAIL: Unexpected error:
    <*errors.errorString | 0xc0040a2eb0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30972 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30972 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc00117a9a0, 0x77b33d8, 0xc0035e3600, 0xc001497180)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497 +0x751
k8s.io/kubernetes/test/e2e/network.glob..func24.26()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1846 +0x9c
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001800a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001800a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001800a80, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Apr 29 22:06:31.861: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-2986, will wait for the garbage collector to delete the pods
Apr 29 22:06:31.923: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 4.119071ms
Apr 29 22:06:32.024: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.82989ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-2986".
STEP: Found 33 events.
Apr 29 22:06:45.239: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-96nph: { } Scheduled: Successfully assigned services-2986/affinity-nodeport-timeout-96nph to node2
Apr 29 22:06:45.239: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-cqgrs: { } Scheduled: Successfully assigned services-2986/affinity-nodeport-timeout-cqgrs to node1
Apr 29 22:06:45.239: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-hcgxk: { } Scheduled: Successfully assigned services-2986/affinity-nodeport-timeout-hcgxk to node1
Apr 29 22:06:45.239: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinity56qnn: { } Scheduled: Successfully assigned services-2986/execpod-affinity56qnn to node2
Apr 29 22:06:45.239: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for kube-proxy-mode-detector: { } Scheduled: Successfully assigned services-2986/kube-proxy-mode-detector to node2
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:05 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 814.416282ms
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:05 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:06 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Created: Created container agnhost-container
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:06 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Started: Started container agnhost-container
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:11 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-hcgxk
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:11 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-cqgrs
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:11 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-96nph
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:11 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Killing: Stopping container agnhost-container
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:13 +0000 UTC - event for affinity-nodeport-timeout-cqgrs: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 365.905602ms
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:13 +0000 UTC - event for affinity-nodeport-timeout-cqgrs: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:13 +0000 UTC - event for affinity-nodeport-timeout-cqgrs: {kubelet node1} Started: Started container affinity-nodeport-timeout
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:13 +0000 UTC - event for affinity-nodeport-timeout-cqgrs: {kubelet node1} Created: Created container affinity-nodeport-timeout
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:13 +0000 UTC - event for affinity-nodeport-timeout-hcgxk: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:14 +0000 UTC - event for affinity-nodeport-timeout-hcgxk: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 286.753356ms
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:14 +0000 UTC - event for affinity-nodeport-timeout-hcgxk: {kubelet node1} Created: Created container affinity-nodeport-timeout
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:14 +0000 UTC - event for affinity-nodeport-timeout-hcgxk: {kubelet node1} Started: Started container affinity-nodeport-timeout
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:15 +0000 UTC - event for affinity-nodeport-timeout-96nph: {kubelet node2} Started: Started container affinity-nodeport-timeout
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:15 +0000 UTC - event for affinity-nodeport-timeout-96nph: {kubelet node2} Created: Created container affinity-nodeport-timeout
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:15 +0000 UTC - event for affinity-nodeport-timeout-96nph: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 249.548246ms
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:15 +0000 UTC - event for affinity-nodeport-timeout-96nph: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:22 +0000 UTC - event for execpod-affinity56qnn: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 288.616549ms
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:22 +0000 UTC - event for execpod-affinity56qnn: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:22 +0000 UTC - event for execpod-affinity56qnn: {kubelet node2} Created: Created container agnhost-container
Apr 29 22:06:45.239: INFO: At 2022-04-29 22:04:22 +0000 UTC - event for execpod-affinity56qnn: {kubelet node2} Started: Started container agnhost-container
Apr 29 22:06:45.239:
INFO: At 2022-04-29 22:06:31 +0000 UTC - event for affinity-nodeport-timeout-96nph: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout Apr 29 22:06:45.240: INFO: At 2022-04-29 22:06:31 +0000 UTC - event for affinity-nodeport-timeout-cqgrs: {kubelet node1} Killing: Stopping container affinity-nodeport-timeout Apr 29 22:06:45.240: INFO: At 2022-04-29 22:06:31 +0000 UTC - event for affinity-nodeport-timeout-hcgxk: {kubelet node1} Killing: Stopping container affinity-nodeport-timeout Apr 29 22:06:45.240: INFO: At 2022-04-29 22:06:31 +0000 UTC - event for execpod-affinity56qnn: {kubelet node2} Killing: Stopping container agnhost-container Apr 29 22:06:45.242: INFO: POD NODE PHASE GRACE CONDITIONS Apr 29 22:06:45.242: INFO: Apr 29 22:06:45.246: INFO: Logging node info for node master1 Apr 29 22:06:45.248: INFO: Node Info: &Node{ObjectMeta:{master1 c968c2e7-7594-4f6e-b85d-932008e8124f 46889 0 2022-04-29 19:57:18 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-29 19:57:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:05:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-04-29 20:08:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:15 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:06:35 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:06:35 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:06:35 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:06:35 +0000 UTC,LastTransitionTime:2022-04-29 20:00:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c3419fad4d2d4c5c9574e5b11ef92b4b,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:5e0f934f-c777-4827-ade6-efec15a825ef,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f09acec459e39fddbd00d2ff6975dd7715ddae0b47f70ed62d6f52e6be7e3f22 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 29 22:06:45.249: INFO: Logging kubelet events for node master1 Apr 29 22:06:45.251: INFO: Logging pods the kubelet thinks is on node master1 Apr 29 22:06:45.264: INFO: coredns-8474476ff8-59qm6 started at 2022-04-29 20:00:39 +0000 UTC (0+1 container 
statuses recorded) Apr 29 22:06:45.264: INFO: Container coredns ready: true, restart count 1 Apr 29 22:06:45.264: INFO: container-registry-65d7c44b96-np5nk started at 2022-04-29 20:04:54 +0000 UTC (0+2 container statuses recorded) Apr 29 22:06:45.264: INFO: Container docker-registry ready: true, restart count 0 Apr 29 22:06:45.264: INFO: Container nginx ready: true, restart count 0 Apr 29 22:06:45.264: INFO: node-exporter-svkqv started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded) Apr 29 22:06:45.264: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:06:45.264: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:06:45.264: INFO: kube-apiserver-master1 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.264: INFO: Container kube-apiserver ready: true, restart count 0 Apr 29 22:06:45.264: INFO: kube-controller-manager-master1 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.264: INFO: Container kube-controller-manager ready: true, restart count 2 Apr 29 22:06:45.264: INFO: kube-scheduler-master1 started at 2022-04-29 20:16:35 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.265: INFO: Container kube-scheduler ready: true, restart count 1 Apr 29 22:06:45.265: INFO: kube-proxy-9s46x started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.265: INFO: Container kube-proxy ready: true, restart count 1 Apr 29 22:06:45.265: INFO: kube-flannel-cskzh started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded) Apr 29 22:06:45.265: INFO: Init container install-cni ready: true, restart count 0 Apr 29 22:06:45.265: INFO: Container kube-flannel ready: true, restart count 1 Apr 29 22:06:45.265: INFO: kube-multus-ds-amd64-w54d6 started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.265: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:06:45.265: INFO: node-feature-discovery-controller-cff799f9f-zpv5m started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.265: INFO: Container nfd-controller ready: true, restart count 0 Apr 29 22:06:45.352: INFO: Latency metrics for node master1 Apr 29 22:06:45.352: INFO: Logging node info for node master2 Apr 29 22:06:45.354: INFO: Node Info: &Node{ObjectMeta:{master2 5b362581-f2d5-419c-a0b0-3aad7bec82f9 46983 0 2022-04-29 19:57:49 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-29 19:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:10:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:15 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:06:37 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:06:37 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:06:37 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:06:37 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d055250c7e194b8a9a572c232266a800,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:fb9f32a4-f021-45dd-bddf-6f1d5ae9abae,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 29 22:06:45.354: INFO: Logging kubelet events for node master2 Apr 29 22:06:45.357: INFO: Logging pods the kubelet thinks is on node master2 Apr 29 22:06:45.370: INFO: kube-multus-ds-amd64-txslv started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.370: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:06:45.370: INFO: kube-apiserver-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.370: INFO: Container kube-apiserver ready: true, restart count 0 Apr 29 22:06:45.370: INFO: kube-proxy-4dnjw started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.371: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:06:45.371: INFO: kube-flannel-q2wgv started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded) Apr 29 22:06:45.371: INFO: Init container install-cni ready: true, restart count 0 Apr 29 22:06:45.371: INFO: Container kube-flannel ready: true, restart count 1 Apr 29 22:06:45.371: INFO: coredns-8474476ff8-bg2wr started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.371: INFO: Container coredns ready: true, restart count 2 Apr 29 22:06:45.371: INFO: prometheus-operator-585ccfb458-q8r6q started at 2022-04-29 20:13:20 +0000 UTC (0+2 container statuses recorded) Apr 29 22:06:45.371: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:06:45.371: INFO: Container prometheus-operator ready: true, restart count 0 Apr 29 22:06:45.371: INFO: node-exporter-9rgc2 started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded) Apr 29 22:06:45.371: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:06:45.371: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:06:45.371: INFO: kube-controller-manager-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.371: INFO: Container kube-controller-manager ready: true, restart count 1 Apr 29 22:06:45.371: INFO: kube-scheduler-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.371: INFO: Container kube-scheduler ready: true, restart count 3 Apr 29 22:06:45.371: INFO: dns-autoscaler-7df78bfcfb-csfp5 started at 2022-04-29 20:00:43 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.371: INFO: Container autoscaler ready: true, restart count 1 Apr 29 22:06:45.462: INFO: Latency metrics for node master2 Apr 29 22:06:45.462: INFO: Logging node info for node master3 Apr 29 22:06:45.464: INFO: Node Info: &Node{ObjectMeta:{master3 1096e515-b559-4c90-b0f7-3398537b5f9e 46985 0 2022-04-29 19:58:00 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw 
flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-29 19:58:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:10:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:16 +0000 UTC,LastTransitionTime:2022-04-29 20:03:16 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:06:38 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:06:38 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 
22:06:38 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:06:38 +0000 UTC,LastTransitionTime:2022-04-29 20:00:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8955b376e6314525a9e533e277f5f4fb,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:6ffefaf4-8a5c-4288-a6a9-78ef35aa67ef,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 29 22:06:45.465: INFO: Logging kubelet events for node master3 Apr 29 22:06:45.466: INFO: Logging pods the kubelet thinks is on node master3 Apr 29 22:06:45.474: INFO: node-exporter-gdq6v started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded) Apr 29 22:06:45.474: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:06:45.474: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:06:45.474: INFO: kube-apiserver-master3 started at 2022-04-29 19:58:29 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.474: INFO: Container kube-apiserver ready: true, restart count 0 Apr 29 22:06:45.474: INFO: kube-controller-manager-master3 started at 2022-04-29 20:06:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.474: INFO: Container kube-controller-manager ready: true, restart count 3 Apr 29 22:06:45.474: INFO: kube-scheduler-master3 started at 2022-04-29 20:06:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.474: INFO: Container kube-scheduler ready: true, restart count 2 Apr 29 22:06:45.474: INFO: kube-proxy-gs7qh started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.474: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:06:45.474: INFO: kube-flannel-g8w9b started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded) Apr 29 22:06:45.474: INFO: Init container install-cni ready: true, restart count 0 Apr 29 22:06:45.474: INFO: Container kube-flannel ready: true, restart count 2 Apr 29 22:06:45.474: INFO: kube-multus-ds-amd64-lxrlj started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.474: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:06:45.547: INFO: Latency metrics for node master3 Apr 29 22:06:45.547: INFO: Logging node info for node node1 Apr 29 22:06:45.550: INFO: Node Info: &Node{ObjectMeta:{node1 6842a10e-614a-46f0-b405-bc18936b0017 47007 0 2022-04-29 19:59:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true 
feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-29 20:08:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-29 20:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-29 20:11:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:02:57 +0000 UTC,LastTransitionTime:2022-04-29 20:02:57 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:06:40 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:06:40 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:06:40 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:06:40 +0000 UTC,LastTransitionTime:2022-04-29 20:00:14 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2a0958eb1b3044f2963c9e5f2e902173,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:fc6a2d14-7726-4aec-9428-6617632ddcbe,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:cfef1b50441378a7b326a606756a12e664a435cc215d910f7aa9415cfde56361 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:145b4fe543408db530a0d8880c681aaa0e3df9b949467d93bcecf42e8625a181 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f09acec459e39fddbd00d2ff6975dd7715ddae0b47f70ed62d6f52e6be7e3f22 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 29 22:06:45.550: INFO: Logging kubelet events for node node1 Apr 29 22:06:45.552: INFO: Logging pods the kubelet thinks is on node node1 Apr 29 22:06:45.567: INFO: kubernetes-dashboard-785dcbb76d-d2k5n started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.567: INFO: Container kubernetes-dashboard ready: true, restart count 1 Apr 29 22:06:45.567: INFO: externalname-service-2qv2w started at 2022-04-29 22:06:03 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.567: INFO: Container externalname-service ready: true, restart count 0 Apr 29 22:06:45.567: INFO: pod-configmaps-3c1a2655-b766-4e1e-92c7-fee07b2e58af started at 2022-04-29 22:05:11 +0000 UTC (0+3 container statuses recorded) Apr 29 22:06:45.567: INFO: Container createcm-volume-test ready: true, restart count 0 Apr 29 22:06:45.567: INFO: Container delcm-volume-test ready: true, restart count 0 Apr 29 22:06:45.567: INFO: Container updcm-volume-test ready: true, restart count 0 Apr 29 22:06:45.567: INFO: kubernetes-metrics-scraper-5558854cb-g47c2 started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.567: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Apr 29 22:06:45.567: INFO: busybox-host-aliases470988a3-0c1c-434a-ad93-3e96e5da324c started at 2022-04-29 22:06:35 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.567: INFO: Container busybox-host-aliases470988a3-0c1c-434a-ad93-3e96e5da324c ready: true, restart count 0 Apr 29 22:06:45.567: INFO: nginx-proxy-node1 started at 2022-04-29 19:59:05 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.567: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 22:06:45.567: INFO: cmk-f5znp started at 2022-04-29 20:12:25 +0000 UTC (0+2 container statuses recorded) Apr 29 22:06:45.567: INFO: Container nodereport ready: true, restart count 0 Apr 29 22:06:45.567: INFO: Container reconcile ready: true, restart count 0 Apr 29 22:06:45.567: INFO: tas-telemetry-aware-scheduling-84ff454dfb-khdw5 started at 2022-04-29 20:16:34 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.567: INFO: Container tas-extender ready: true, restart count 0 Apr 29 22:06:45.567: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq started at 2022-04-29 20:09:17 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.567: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 22:06:45.567: INFO: cmk-init-discover-node1-gxlbt started at 2022-04-29 20:11:43 +0000 UTC (0+3 container statuses recorded) Apr 29 22:06:45.567: INFO: Container discover ready: false, restart count 0 Apr 29 22:06:45.567: INFO: Container init ready: false, restart count 0 Apr 29 22:06:45.567: INFO: Container install ready: false, restart count 0 Apr 29 22:06:45.567: INFO: prometheus-k8s-0 started at 2022-04-29 20:13:38 +0000 UTC (0+4 container statuses recorded) Apr 29 22:06:45.567: INFO: Container config-reloader ready: true, restart count 0 Apr 29 22:06:45.567: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Apr 29 22:06:45.567: INFO: Container grafana ready: true, restart count 0 Apr 29 22:06:45.567: INFO: Container prometheus ready: true, restart count 1 Apr 29 22:06:45.567: INFO: kube-proxy-v9tgj started at 
2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.567: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:06:45.567: INFO: node-exporter-c8777 started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded) Apr 29 22:06:45.567: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:06:45.567: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:06:45.567: INFO: execpod6gh7z started at 2022-04-29 22:06:09 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.567: INFO: Container agnhost-container ready: true, restart count 0 Apr 29 22:06:45.567: INFO: kube-flannel-47phs started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded) Apr 29 22:06:45.567: INFO: Init container install-cni ready: true, restart count 2 Apr 29 22:06:45.567: INFO: Container kube-flannel ready: true, restart count 2 Apr 29 22:06:45.567: INFO: collectd-ccgw2 started at 2022-04-29 20:17:24 +0000 UTC (0+3 container statuses recorded) Apr 29 22:06:45.567: INFO: Container collectd ready: true, restart count 0 Apr 29 22:06:45.567: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 22:06:45.567: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 22:06:45.567: INFO: kube-multus-ds-amd64-kkz4q started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.567: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:06:45.567: INFO: var-expansion-b797e0f3-16a6-40da-a8a6-59c61b1f3c8d started at 2022-04-29 22:04:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.567: INFO: Container dapi-container ready: false, restart count 0 Apr 29 22:06:45.567: INFO: node-feature-discovery-worker-kbl9s started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.567: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 22:06:45.741: INFO: Latency metrics for node node1 Apr 29 22:06:45.741: INFO: Logging node info for node node2 Apr 29 22:06:45.744: INFO: Node Info: &Node{ObjectMeta:{node2 2f399869-e81b-465d-97b4-806b6186d34a 47109 0 2022-04-29 19:59:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true 
feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-29 20:08:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-29 20:12:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-29 20:12:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:12 +0000 UTC,LastTransitionTime:2022-04-29 20:03:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:06:44 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:06:44 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:06:44 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:06:44 +0000 UTC,LastTransitionTime:2022-04-29 20:03:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:22c763056cc24e6ba6e8bbadb5113d3d,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:8ca050bd-5d8a-4c59-8e02-41e26864aa92,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:cfef1b50441378a7b326a606756a12e664a435cc215d910f7aa9415cfde56361 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:145b4fe543408db530a0d8880c681aaa0e3df9b949467d93bcecf42e8625a181 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 29 22:06:45.745: INFO: Logging kubelet events for node node2 Apr 29 22:06:45.747: INFO: Logging pods the kubelet thinks is on node node2 Apr 29 22:06:45.760: INFO: nginx-proxy-node2 started at 2022-04-29 19:59:05 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.760: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 22:06:45.760: INFO: node-exporter-tlpmt started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded) Apr 29 22:06:45.760: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:06:45.760: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:06:45.760: INFO: busybox-user-65534-92350086-21f8-49ea-83eb-c26a193bacff started at 2022-04-29 22:06:44 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.760: INFO: Container busybox-user-65534-92350086-21f8-49ea-83eb-c26a193bacff ready: false, restart count 0 Apr 29 22:06:45.760: INFO: kube-proxy-k6tv2 started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.760: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:06:45.760: INFO: kube-flannel-dbcj8 started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded) Apr 29 22:06:45.760: INFO: Init container install-cni ready: true, restart count 2 Apr 29 22:06:45.760: INFO: Container kube-flannel ready: true, restart count 3 Apr 29 
22:06:45.760: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5 started at 2022-04-29 20:09:17 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.760: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 22:06:45.760: INFO: cmk-74bh9 started at 2022-04-29 20:12:25 +0000 UTC (0+2 container statuses recorded) Apr 29 22:06:45.760: INFO: Container nodereport ready: true, restart count 0 Apr 29 22:06:45.760: INFO: Container reconcile ready: true, restart count 0 Apr 29 22:06:45.760: INFO: pod-init-dfa50435-051e-4eb8-96e8-ed91a958dba2 started at 2022-04-29 22:06:37 +0000 UTC (2+1 container statuses recorded) Apr 29 22:06:45.761: INFO: Init container init1 ready: false, restart count 1 Apr 29 22:06:45.761: INFO: Init container init2 ready: false, restart count 0 Apr 29 22:06:45.761: INFO: Container run1 ready: false, restart count 0 Apr 29 22:06:45.761: INFO: forbid-27521165-t6zxx started at 2022-04-29 22:05:00 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.761: INFO: Container c ready: true, restart count 0 Apr 29 22:06:45.761: INFO: cmk-webhook-6c9d5f8578-b9mdv started at 2022-04-29 20:12:26 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.761: INFO: Container cmk-webhook ready: true, restart count 0 Apr 29 22:06:45.761: INFO: node-feature-discovery-worker-jtjjb started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.761: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 22:06:45.761: INFO: kube-multus-ds-amd64-7slcd started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.761: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:06:45.761: INFO: test-pod started at 2022-04-29 22:05:34 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.761: INFO: Container webserver ready: true, restart count 0 Apr 29 22:06:45.761: INFO: liveness-fc850a46-b0ca-4e19-b7f1-0098d0c3504c started at 2022-04-29 22:05:55 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.761: INFO: Container agnhost-container ready: true, restart count 2 Apr 29 22:06:45.761: INFO: busybox-privileged-false-6a4a7b6e-aa3e-4d43-b4ee-8555341e7e62 started at 2022-04-29 22:06:39 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.761: INFO: Container busybox-privileged-false-6a4a7b6e-aa3e-4d43-b4ee-8555341e7e62 ready: false, restart count 0 Apr 29 22:06:45.761: INFO: externalname-service-4lmb6 started at 2022-04-29 22:06:03 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.761: INFO: Container externalname-service ready: true, restart count 0 Apr 29 22:06:45.761: INFO: cmk-init-discover-node2-csdn7 started at 2022-04-29 20:12:03 +0000 UTC (0+3 container statuses recorded) Apr 29 22:06:45.761: INFO: Container discover ready: false, restart count 0 Apr 29 22:06:45.761: INFO: Container init ready: false, restart count 0 Apr 29 22:06:45.761: INFO: Container install ready: false, restart count 0 Apr 29 22:06:45.761: INFO: collectd-zxs8j started at 2022-04-29 20:17:24 +0000 UTC (0+3 container statuses recorded) Apr 29 22:06:45.761: INFO: Container collectd ready: true, restart count 0 Apr 29 22:06:45.761: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 22:06:45.761: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 22:06:45.761: INFO: pod-exec-websocket-81ed890b-a736-456e-a334-fcd8cf859f8e started at 2022-04-29 22:06:26 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.761: INFO: Container main ready: true, restart count 0 
Apr 29 22:06:45.761: INFO: pod-5a04f857-b5e4-4e64-aaf2-5a7bade1c223 started at 2022-04-29 22:06:43 +0000 UTC (0+1 container statuses recorded) Apr 29 22:06:45.761: INFO: Container test-container ready: false, restart count 0 Apr 29 22:06:46.230: INFO: Latency metrics for node node2 Apr 29 22:06:46.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2986" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [163.061 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:06:31.860: Unexpected error: <*errors.errorString | 0xc0040a2eb0>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30972 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30972 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":27,"skipped":573,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:43.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 29 22:06:43.815: INFO: Waiting up to 5m0s for pod "pod-5a04f857-b5e4-4e64-aaf2-5a7bade1c223" in namespace "emptydir-189" to be "Succeeded or Failed" Apr 29 22:06:43.817: INFO: Pod "pod-5a04f857-b5e4-4e64-aaf2-5a7bade1c223": Phase="Pending", Reason="", readiness=false. Elapsed: 1.804982ms Apr 29 22:06:45.820: INFO: Pod "pod-5a04f857-b5e4-4e64-aaf2-5a7bade1c223": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004680101s Apr 29 22:06:47.823: INFO: Pod "pod-5a04f857-b5e4-4e64-aaf2-5a7bade1c223": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007690084s STEP: Saw pod success Apr 29 22:06:47.823: INFO: Pod "pod-5a04f857-b5e4-4e64-aaf2-5a7bade1c223" satisfied condition "Succeeded or Failed" Apr 29 22:06:47.826: INFO: Trying to get logs from node node2 pod pod-5a04f857-b5e4-4e64-aaf2-5a7bade1c223 container test-container: STEP: delete the pod Apr 29 22:06:47.840: INFO: Waiting for pod pod-5a04f857-b5e4-4e64-aaf2-5a7bade1c223 to disappear Apr 29 22:06:47.842: INFO: Pod pod-5a04f857-b5e4-4e64-aaf2-5a7bade1c223 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:47.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-189" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":424,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:44.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:06:44.514: INFO: Waiting up to 5m0s for pod "busybox-user-65534-92350086-21f8-49ea-83eb-c26a193bacff" in namespace "security-context-test-5239" to be "Succeeded or Failed" Apr 29 22:06:44.516: INFO: Pod "busybox-user-65534-92350086-21f8-49ea-83eb-c26a193bacff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.259605ms Apr 29 22:06:46.519: INFO: Pod "busybox-user-65534-92350086-21f8-49ea-83eb-c26a193bacff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004778418s Apr 29 22:06:48.524: INFO: Pod "busybox-user-65534-92350086-21f8-49ea-83eb-c26a193bacff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009537205s Apr 29 22:06:50.527: INFO: Pod "busybox-user-65534-92350086-21f8-49ea-83eb-c26a193bacff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013140224s Apr 29 22:06:50.527: INFO: Pod "busybox-user-65534-92350086-21f8-49ea-83eb-c26a193bacff" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:50.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5239" for this suite. 
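------------------------------ For reference, the runAsUser check that just completed can be reproduced outside the suite. The following is a minimal client-go sketch, not the framework's own helper: the pod name, the "default" namespace, and the kubeconfig path are assumptions for illustration, while the image (busybox:1.28) and the uid (65534) are taken from the log above.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	// Build a client from the same kubeconfig the suite logs (">>> kubeConfig").
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// A pod that runs `id -u` as uid 65534 and exits; the conformance test
	// then waits for it to reach "Succeeded or Failed", as logged above.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534-example"}, // hypothetical name; the suite appends a UID
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "busybox",
				Image:           "busybox:1.28",
				Command:         []string{"sh", "-c", "id -u"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: int64Ptr(65534)},
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}
------------------------------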
• [SLOW TEST:6.050 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":627,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:46.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:06:46.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7714 create -f -' Apr 29 22:06:46.696: INFO: stderr: "" Apr 29 22:06:46.696: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Apr 29 22:06:46.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7714 create -f -' Apr 29 22:06:47.017: INFO: stderr: "" Apr 29 22:06:47.017: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Apr 29 22:06:48.020: INFO: Selector matched 1 pods for map[app:agnhost] Apr 29 22:06:48.020: INFO: Found 0 / 1 Apr 29 22:06:49.021: INFO: Selector matched 1 pods for map[app:agnhost] Apr 29 22:06:49.021: INFO: Found 0 / 1 Apr 29 22:06:50.021: INFO: Selector matched 1 pods for map[app:agnhost] Apr 29 22:06:50.021: INFO: Found 1 / 1 Apr 29 22:06:50.021: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 29 22:06:50.023: INFO: Selector matched 1 pods for map[app:agnhost] Apr 29 22:06:50.023: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
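------------------------------ The "Waiting for Agnhost primary to start" loop above polls the API server for pods matching the ReplicationController's label until one is running ("Selector matched 1 pods for map[app:agnhost]"). A sketch of the equivalent query is below, assuming client-go against the same cluster; the namespace kubectl-7714 is the one from the log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List pods by the RC's label selector, as the wait loop above does;
	// the test keeps polling until the matched pod reports Running.
	pods, err := cs.CoreV1().Pods("kubectl-7714").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "app=agnhost"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}
------------------------------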
Apr 29 22:06:50.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7714 describe pod agnhost-primary-n7dt2' Apr 29 22:06:50.205: INFO: stderr: "" Apr 29 22:06:50.205: INFO: stdout: "Name: agnhost-primary-n7dt2\nNamespace: kubectl-7714\nPriority: 0\nNode: node2/10.10.190.208\nStart Time: Fri, 29 Apr 2022 22:06:46 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.204\"\n ],\n \"mac\": \"1e:9e:24:23:67:a7\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.204\"\n ],\n \"mac\": \"1e:9e:24:23:67:a7\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: collectd\nStatus: Running\nIP: 10.244.4.204\nIPs:\n IP: 10.244.4.204\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: docker://8feeec07ac165d2653fbed9bac300b8a0c3df5720434ed0f99dd0c029a82a5ed\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 29 Apr 2022 22:06:49 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l65kc (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-l65kc:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-7714/agnhost-primary-n7dt2 to node2\n Normal Pulling 2s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n Normal Pulled 2s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 282.33848ms\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" Apr 29 22:06:50.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7714 describe rc agnhost-primary' Apr 29 22:06:50.400: INFO: stderr: "" Apr 29 22:06:50.400: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7714\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-primary-n7dt2\n" Apr 29 22:06:50.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7714 describe service agnhost-primary' Apr 29 22:06:50.573: INFO: stderr: "" Apr 29 
22:06:50.573: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7714\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.233.43.88\nIPs: 10.233.43.88\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.4.204:6379\nSession Affinity: None\nEvents: \n" Apr 29 22:06:50.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7714 describe node master1' Apr 29 22:06:50.790: INFO: stderr: "" Apr 29 22:06:50.790: INFO: stdout: "Name: master1\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=master1\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations: flannel.alpha.coreos.com/backend-data: null\n flannel.alpha.coreos.com/backend-type: host-gw\n flannel.alpha.coreos.com/kube-subnet-manager: true\n flannel.alpha.coreos.com/public-ip: 10.10.190.202\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n nfd.node.kubernetes.io/master.version: v0.8.2\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 29 Apr 2022 19:57:18 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: master1\n AcquireTime: \n RenewTime: Fri, 29 Apr 2022 22:06:44 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Fri, 29 Apr 2022 20:03:15 +0000 Fri, 29 Apr 2022 20:03:15 +0000 FlannelIsUp Flannel is running on this node\n MemoryPressure False Fri, 29 Apr 2022 22:06:45 +0000 Fri, 29 Apr 2022 19:57:15 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 29 Apr 2022 22:06:45 +0000 Fri, 29 Apr 2022 19:57:15 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 29 Apr 2022 22:06:45 +0000 Fri, 29 Apr 2022 19:57:15 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 29 Apr 2022 22:06:45 +0000 Fri, 29 Apr 2022 20:00:09 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.10.190.202\n Hostname: master1\nCapacity:\n cpu: 80\n ephemeral-storage: 440625980Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 196518300Ki\n pods: 110\nAllocatable:\n cpu: 79550m\n ephemeral-storage: 406080902496\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 195629468Ki\n pods: 110\nSystem Info:\n Machine ID: c3419fad4d2d4c5c9574e5b11ef92b4b\n System UUID: 00ACFB60-0631-E711-906E-0017A4403562\n Boot ID: 5e0f934f-c777-4827-ade6-efec15a825ef\n Kernel Version: 3.10.0-1160.62.1.el7.x86_64\n OS Image: CentOS Linux 7 (Core)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://20.10.14\n Kubelet Version: v1.21.1\n Kube-Proxy Version: v1.21.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (10 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system container-registry-65d7c44b96-np5nk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 121m\n kube-system coredns-8474476ff8-59qm6 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 126m\n kube-system 
kube-apiserver-master1 250m (0%) 0 (0%) 0 (0%) 0 (0%) 119m\n kube-system kube-controller-manager-master1 200m (0%) 0 (0%) 0 (0%) 0 (0%) 128m\n kube-system kube-flannel-cskzh 150m (0%) 300m (0%) 64M (0%) 500M (0%) 126m\n kube-system kube-multus-ds-amd64-w54d6 100m (0%) 100m (0%) 90Mi (0%) 90Mi (0%) 126m\n kube-system kube-proxy-9s46x 0 (0%) 0 (0%) 0 (0%) 0 (0%) 127m\n kube-system kube-scheduler-master1 100m (0%) 0 (0%) 0 (0%) 0 (0%) 110m\n kube-system node-feature-discovery-controller-cff799f9f-zpv5m 0 (0%) 0 (0%) 0 (0%) 0 (0%) 118m\n monitoring node-exporter-svkqv 112m (0%) 270m (0%) 200Mi (0%) 220Mi (0%) 113m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1012m (1%) 670m (0%)\n memory 431140Ki (0%) 1003316480 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Apr 29 22:06:50.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7714 describe namespace kubectl-7714' Apr 29 22:06:50.985: INFO: stderr: "" Apr 29 22:06:50.985: INFO: stdout: "Name: kubectl-7714\nLabels: e2e-framework=kubectl\n e2e-run=58db611f-fcc9-45ff-90ed-ec3dd45f58e9\n kubernetes.io/metadata.name=kubectl-7714\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:50.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7714" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":28,"skipped":582,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:47.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token STEP: reading a file in the container Apr 29 22:06:52.416: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1443 pod-service-account-23fe4187-9580-4b36-b7c0-0954c64bfffe -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 29 22:06:52.928: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1443 pod-service-account-23fe4187-9580-4b36-b7c0-0954c64bfffe -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 29 22:06:53.154: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1443 pod-service-account-23fe4187-9580-4b36-b7c0-0954c64bfffe -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' 
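------------------------------ The token, ca.crt, and namespace files read above come from a projected volume that the kubelet mounts automatically; the earlier describe output shows it as kube-api-access-l65kc with TokenExpirationSeconds 3607. Below is a sketch of that volume source built from the corev1 types. The volume name here is illustrative, since the real names carry a generated suffix.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	// The automounted service-account credentials are three projections:
	// a bound token, the cluster CA bundle, and the pod's namespace
	// via the downward API.
	vol := corev1.Volume{
		Name: "kube-api-access", // illustrative; actual names look like kube-api-access-l65kc
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: int64Ptr(3607),
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
------------------------------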
[AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:53.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1443" for this suite. • [SLOW TEST:5.562 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":21,"skipped":432,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:53.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should complete a service status lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Service STEP: watching for the Service to be added Apr 29 22:06:53.476: INFO: Found Service test-service-f82mx in namespace services-3860 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] Apr 29 22:06:53.476: INFO: Service test-service-f82mx created STEP: Getting /status Apr 29 22:06:53.480: INFO: Service test-service-f82mx has LoadBalancer: {[]} STEP: patching the ServiceStatus STEP: watching for the Service to be patched Apr 29 22:06:53.487: INFO: observed Service test-service-f82mx in namespace services-3860 with annotations: map[] & LoadBalancer: {[]} Apr 29 22:06:53.487: INFO: Found Service test-service-f82mx in namespace services-3860 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} Apr 29 22:06:53.487: INFO: Service test-service-f82mx has service status patched STEP: updating the ServiceStatus Apr 29 22:06:53.492: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the Service to be updated Apr 29 22:06:53.493: INFO: Observed Service test-service-f82mx in namespace services-3860 with annotations: map[] & Conditions: {[]} Apr 29 22:06:53.493: INFO: Observed event: &Service{ObjectMeta:{test-service-f82mx services-3860 cfc96460-bff0-4e86-9deb-b31fda2e50a9 47398 0 2022-04-29 22:06:53 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2022-04-29 22:06:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}},"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.233.1.151,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:*SingleStack,ClusterIPs:[10.233.1.151],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} Apr 29 22:06:53.494: INFO: Found Service test-service-f82mx in namespace services-3860 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Apr 29 22:06:53.494: INFO: Service test-service-f82mx has service status updated STEP: patching the service STEP: watching for the Service to be patched Apr 29 22:06:53.507: INFO: observed Service test-service-f82mx in namespace services-3860 with labels: map[test-service-static:true] Apr 29 22:06:53.507: INFO: observed Service test-service-f82mx in namespace services-3860 with labels: map[test-service-static:true] Apr 29 22:06:53.507: INFO: observed Service test-service-f82mx in namespace services-3860 with labels: map[test-service-static:true] Apr 29 22:06:53.507: INFO: Found Service test-service-f82mx in namespace services-3860 with labels: map[test-service:patched test-service-static:true] Apr 29 22:06:53.507: INFO: Service test-service-f82mx patched STEP: deleting the service STEP: watching for the Service to be deleted Apr 29 22:06:53.516: INFO: Observed event: ADDED Apr 29 22:06:53.516: INFO: Observed event: MODIFIED Apr 29 22:06:53.516: INFO: Observed event: MODIFIED Apr 29 22:06:53.516: INFO: Observed event: MODIFIED Apr 29 22:06:53.516: INFO: Found Service test-service-f82mx in namespace services-3860 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] Apr 29 22:06:53.516: INFO: Service test-service-f82mx deleted [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:53.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3860" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":22,"skipped":439,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:50.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Apr 29 22:06:50.666: INFO: Waiting up to 5m0s for pod "downward-api-4248e2ff-ebc7-446e-8bbc-a196513e1452" in namespace "downward-api-54" to be "Succeeded or Failed" Apr 29 22:06:50.668: INFO: Pod "downward-api-4248e2ff-ebc7-446e-8bbc-a196513e1452": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099591ms Apr 29 22:06:52.672: INFO: Pod "downward-api-4248e2ff-ebc7-446e-8bbc-a196513e1452": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00624692s Apr 29 22:06:54.678: INFO: Pod "downward-api-4248e2ff-ebc7-446e-8bbc-a196513e1452": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012399299s STEP: Saw pod success Apr 29 22:06:54.678: INFO: Pod "downward-api-4248e2ff-ebc7-446e-8bbc-a196513e1452" satisfied condition "Succeeded or Failed" Apr 29 22:06:54.680: INFO: Trying to get logs from node node2 pod downward-api-4248e2ff-ebc7-446e-8bbc-a196513e1452 container dapi-container: STEP: delete the pod Apr 29 22:06:54.694: INFO: Waiting for pod downward-api-4248e2ff-ebc7-446e-8bbc-a196513e1452 to disappear Apr 29 22:06:54.696: INFO: Pod downward-api-4248e2ff-ebc7-446e-8bbc-a196513e1452 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:54.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-54" for this suite. 
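------------------------------
What this spec exercises, as a minimal client-go sketch: the pod wires limits.cpu and limits.memory into env vars via resourceFieldRef while declaring no limits, so the kubelet substitutes node allocatable values. Names, image tag, and command below are illustrative, not lifted from the suite.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIDefaultLimitsPod builds a pod whose env vars resolve to the
// node's allocatable CPU/memory because the container declares no limits.
func downwardAPIDefaultLimitsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command: []string{"sh", "-c", "env"},
				// No resources.limits set: the downward API falls back to
				// node allocatable for limits.cpu / limits.memory.
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
					}},
					{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
					}},
				},
			}},
		},
	}
}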
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":685,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:54.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-4486/configmap-test-4d82fe94-f048-4819-a811-9be569cdc605 STEP: Creating a pod to test consume configMaps Apr 29 22:06:54.752: INFO: Waiting up to 5m0s for pod "pod-configmaps-e09b087b-d6b5-4930-81ba-c205a26c2fa5" in namespace "configmap-4486" to be "Succeeded or Failed" Apr 29 22:06:54.753: INFO: Pod "pod-configmaps-e09b087b-d6b5-4930-81ba-c205a26c2fa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1.768922ms Apr 29 22:06:56.758: INFO: Pod "pod-configmaps-e09b087b-d6b5-4930-81ba-c205a26c2fa5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006335664s Apr 29 22:06:58.764: INFO: Pod "pod-configmaps-e09b087b-d6b5-4930-81ba-c205a26c2fa5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012167641s STEP: Saw pod success Apr 29 22:06:58.764: INFO: Pod "pod-configmaps-e09b087b-d6b5-4930-81ba-c205a26c2fa5" satisfied condition "Succeeded or Failed" Apr 29 22:06:58.766: INFO: Trying to get logs from node node1 pod pod-configmaps-e09b087b-d6b5-4930-81ba-c205a26c2fa5 container env-test: STEP: delete the pod Apr 29 22:06:58.777: INFO: Waiting for pod pod-configmaps-e09b087b-d6b5-4930-81ba-c205a26c2fa5 to disappear Apr 29 22:06:58.780: INFO: Pod pod-configmaps-e09b087b-d6b5-4930-81ba-c205a26c2fa5 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:58.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4486" for this suite. 
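------------------------------
The pattern under test, sketched with client-go types: a ConfigMap key surfaced as a container environment variable through configMapKeyRef. The ConfigMap name, key, and pod name here are placeholders, not the suite's generated names.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapEnvPod returns a ConfigMap and a pod that consumes one of its
// keys as the env var CONFIG_DATA_1, then exits after printing the env.
func configMapEnvPod() (*corev1.ConfigMap, *corev1.Pod) {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-config"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "env-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	return cm, pod
}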
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":690,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:58.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:06:58.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4098" for this suite. 
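------------------------------
The discovery walk the previous spec performs (fetch /apis, find the apiextensions.k8s.io group, then confirm customresourcedefinitions in the v1 group/version document) can be reproduced with client-go's discovery client. A rough sketch, error handling trimmed, with no claim to match the suite's own code:

package sketch

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// findCRDResource checks that the apiextensions.k8s.io group is advertised
// in /apis and that its v1 document lists customresourcedefinitions.
func findCRDResource(cs kubernetes.Interface) error {
	groups, err := cs.Discovery().ServerGroups() // GET /apis
	if err != nil {
		return err
	}
	for _, g := range groups.Groups {
		if g.Name == "apiextensions.k8s.io" {
			fmt.Println("preferred version:", g.PreferredVersion.GroupVersion)
		}
	}
	// GET /apis/apiextensions.k8s.io/v1
	rl, err := cs.Discovery().ServerResourcesForGroupVersion("apiextensions.k8s.io/v1")
	if err != nil {
		return err
	}
	for _, r := range rl.APIResources {
		if r.Name == "customresourcedefinitions" {
			fmt.Println("found", r.Name, "kind:", r.Kind)
		}
	}
	return nil
}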
• ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":50,"skipped":699,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:58.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Apr 29 22:06:58.914: INFO: The status of Pod pod-update-6b4266c2-077d-40bf-8029-a81ab19c7228 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:07:00.918: INFO: The status of Pod pod-update-6b4266c2-077d-40bf-8029-a81ab19c7228 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:07:02.918: INFO: The status of Pod pod-update-6b4266c2-077d-40bf-8029-a81ab19c7228 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 29 22:07:03.431: INFO: Successfully updated pod "pod-update-6b4266c2-077d-40bf-8029-a81ab19c7228" STEP: verifying the updated pod is in kubernetes Apr 29 22:07:03.435: INFO: Pod update OK [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:03.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4836" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":51,"skipped":719,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:03.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:05.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-1066" for this suite. 
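------------------------------
The EndpointSlice spec above relies on slices created for a Service carrying the kubernetes.io/service-name label, which is how they are correlated with their Service. A small client-go sketch of that lookup (function name illustrative):

package sketch

import (
	"context"
	"fmt"

	discoveryv1 "k8s.io/api/discovery/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// slicesForService lists the EndpointSlices that belong to a Service by
// selecting on the well-known kubernetes.io/service-name label.
func slicesForService(ctx context.Context, cs kubernetes.Interface, ns, svc string) error {
	list, err := cs.DiscoveryV1().EndpointSlices(ns).List(ctx, metav1.ListOptions{
		LabelSelector: discoveryv1.LabelServiceName + "=" + svc,
	})
	if err != nil {
		return err
	}
	for _, s := range list.Items {
		fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
	}
	return nil
}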
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":52,"skipped":808,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:05.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 29 22:07:08.762: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:08.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8000" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":823,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:08.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-22fc46de-5a4e-462a-bfdb-4999947dc972 STEP: Creating a pod to test consume secrets Apr 29 22:07:08.868: INFO: Waiting up to 5m0s for pod "pod-secrets-035df0cf-f79a-42db-a16b-3971a8f43bf0" in namespace "secrets-2986" to be "Succeeded or Failed" Apr 29 22:07:08.870: INFO: Pod "pod-secrets-035df0cf-f79a-42db-a16b-3971a8f43bf0": Phase="Pending", Reason="", readiness=false. Elapsed: 1.851954ms Apr 29 22:07:10.874: INFO: Pod "pod-secrets-035df0cf-f79a-42db-a16b-3971a8f43bf0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005149306s Apr 29 22:07:12.877: INFO: Pod "pod-secrets-035df0cf-f79a-42db-a16b-3971a8f43bf0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008053281s STEP: Saw pod success Apr 29 22:07:12.877: INFO: Pod "pod-secrets-035df0cf-f79a-42db-a16b-3971a8f43bf0" satisfied condition "Succeeded or Failed" Apr 29 22:07:12.879: INFO: Trying to get logs from node node1 pod pod-secrets-035df0cf-f79a-42db-a16b-3971a8f43bf0 container secret-volume-test: STEP: delete the pod Apr 29 22:07:12.892: INFO: Waiting for pod pod-secrets-035df0cf-f79a-42db-a16b-3971a8f43bf0 to disappear Apr 29 22:07:12.893: INFO: Pod pod-secrets-035df0cf-f79a-42db-a16b-3971a8f43bf0 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:12.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2986" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":851,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:51.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-1759 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1759 STEP: creating replication controller externalsvc in namespace services-1759 I0429 22:06:51.072819 32 runners.go:190] Created replication controller with name: externalsvc, namespace: services-1759, replica count: 2 I0429 22:06:54.124417 32 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 22:06:57.126322 32 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 29 22:06:57.139: INFO: Creating new exec pod Apr 29 22:07:01.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1759 exec execpodsk9rd -- /bin/sh -x -c nslookup clusterip-service.services-1759.svc.cluster.local' Apr 29 22:07:01.437: INFO: stderr: "+ nslookup clusterip-service.services-1759.svc.cluster.local\n" Apr 29 22:07:01.437: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nclusterip-service.services-1759.svc.cluster.local\tcanonical name = externalsvc.services-1759.svc.cluster.local.\nName:\texternalsvc.services-1759.svc.cluster.local\nAddress: 10.233.52.100\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1759, will wait for the garbage collector to delete the pods Apr 29 22:07:01.496: INFO: Deleting ReplicationController externalsvc took: 5.239579ms Apr 29 
22:07:01.597: INFO: Terminating ReplicationController externalsvc pods took: 100.649233ms Apr 29 22:07:15.206: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:15.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1759" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:24.180 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:12.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 29 22:07:13.462: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 29 22:07:15.470: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866833, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866833, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866833, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866833, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 29 22:07:18.481: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 
22:07:18.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1392" for this suite. STEP: Destroying namespace "webhook-1392-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.600 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":55,"skipped":870,"failed":0} SSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":29,"skipped":603,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:15.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 29 22:07:15.252: INFO: Waiting up to 5m0s for pod "pod-e75dfe2b-fe37-4b1f-92c2-2d75e465190d" in namespace "emptydir-50" to be "Succeeded or Failed" Apr 29 22:07:15.254: INFO: Pod "pod-e75dfe2b-fe37-4b1f-92c2-2d75e465190d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.244001ms Apr 29 22:07:17.258: INFO: Pod "pod-e75dfe2b-fe37-4b1f-92c2-2d75e465190d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005886182s Apr 29 22:07:19.263: INFO: Pod "pod-e75dfe2b-fe37-4b1f-92c2-2d75e465190d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010850645s STEP: Saw pod success Apr 29 22:07:19.263: INFO: Pod "pod-e75dfe2b-fe37-4b1f-92c2-2d75e465190d" satisfied condition "Succeeded or Failed" Apr 29 22:07:19.265: INFO: Trying to get logs from node node2 pod pod-e75dfe2b-fe37-4b1f-92c2-2d75e465190d container test-container: STEP: delete the pod Apr 29 22:07:19.280: INFO: Waiting for pod pod-e75dfe2b-fe37-4b1f-92c2-2d75e465190d to disappear Apr 29 22:07:19.282: INFO: Pod pod-e75dfe2b-fe37-4b1f-92c2-2d75e465190d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:19.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-50" for this suite. 
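------------------------------
For reference, "(root,0666,tmpfs)" denotes a memory-backed emptyDir mounted by a root container that creates a file with mode 0666. A rough equivalent of that pod, with illustrative names and a shell probe standing in for the suite's test image:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tmpfsEmptyDirPod mounts a tmpfs emptyDir at /test and verifies a file
// created there ends up with the requested 0666 permissions.
func tmpfsEmptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium=Memory makes the emptyDir a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command: []string{"sh", "-c",
					"touch /test/f && chmod 0666 /test/f && ls -l /test/f && mount | grep /test"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test"}},
			}},
		},
	}
}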
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":603,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:37.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Apr 29 22:06:37.291: INFO: PodSpec: initContainers in spec.initContainers Apr 29 22:07:22.328: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-dfa50435-051e-4eb8-96e8-ed91a958dba2", GenerateName:"", Namespace:"init-container-397", SelfLink:"", UID:"02cc953e-4e54-4b04-b7e6-c14224fe8184", ResourceVersion:"48005", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63786866797, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"291929896"}, Annotations:map[string]string{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.200\"\n ],\n \"mac\": \"32:e6:d8:d3:5b:24\",\n \"default\": true,\n \"dns\": {}\n}]", "k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.200\"\n ],\n \"mac\": \"32:e6:d8:d3:5b:24\",\n \"default\": true,\n \"dns\": {}\n}]", "kubernetes.io/psp":"collectd"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0039ce030), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0039ce048)}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0039ce060), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0039ce078)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0039ce090), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0039ce0a8)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-9gr9p", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), 
Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0047aa000), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-9gr9p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-9gr9p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-9gr9p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", 
SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000abe0f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"node2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001c02070), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000abe180)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000abe1a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000abe1a8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000abe1ac), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001ef8030), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866797, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866797, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866797, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866797, loc:(*time.Location)(0x9e2e180)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.10.190.208", PodIP:"10.244.4.200", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.4.200"}}, StartTime:(*v1.Time)(0xc0039ce0d8), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001c02150)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001c021c0)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"docker-pullable://k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"docker://4d6062d780a5d1ec7b808b2cc2b320cd8cd4d3e1ee13bd074a080c2dcb66cb98", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0047aa080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0047aa060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc000abe23f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:22.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-397" for this suite. • [SLOW TEST:45.064 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":37,"skipped":759,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:22.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:22.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2355" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":38,"skipped":805,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:18.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 29 22:07:18.593: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b278a571-638b-4fdc-8edf-b17bc49f3262" in namespace "downward-api-8639" to be "Succeeded or Failed" Apr 29 22:07:18.595: INFO: Pod "downwardapi-volume-b278a571-638b-4fdc-8edf-b17bc49f3262": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175264ms Apr 29 22:07:20.598: INFO: Pod "downwardapi-volume-b278a571-638b-4fdc-8edf-b17bc49f3262": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005072822s Apr 29 22:07:22.601: INFO: Pod "downwardapi-volume-b278a571-638b-4fdc-8edf-b17bc49f3262": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008415906s STEP: Saw pod success Apr 29 22:07:22.601: INFO: Pod "downwardapi-volume-b278a571-638b-4fdc-8edf-b17bc49f3262" satisfied condition "Succeeded or Failed" Apr 29 22:07:22.605: INFO: Trying to get logs from node node2 pod downwardapi-volume-b278a571-638b-4fdc-8edf-b17bc49f3262 container client-container: STEP: delete the pod Apr 29 22:07:22.618: INFO: Waiting for pod downwardapi-volume-b278a571-638b-4fdc-8edf-b17bc49f3262 to disappear Apr 29 22:07:22.620: INFO: Pod downwardapi-volume-b278a571-638b-4fdc-8edf-b17bc49f3262 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:22.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8639" for this suite. 
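------------------------------
The downward API volume variant of the earlier env-var case: a downwardAPI volume file exposes limits.memory, and because the container sets no memory limit the kubelet writes node allocatable memory instead. A minimal sketch with illustrative names:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIVolumePod projects limits.memory into /etc/podinfo/memory_limit;
// with no limit declared, the file carries the node allocatable value.
func downwardAPIVolumePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container", // required for volume items
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}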
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":56,"skipped":880,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:22.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:22.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-67" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":57,"skipped":913,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:19.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 29 22:07:19.325: INFO: Waiting up to 5m0s for pod "pod-7759d18e-d2ba-48e5-8ad3-3cb00805ca2c" in namespace "emptydir-5076" to be "Succeeded or Failed" Apr 29 22:07:19.327: INFO: Pod "pod-7759d18e-d2ba-48e5-8ad3-3cb00805ca2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130432ms Apr 29 22:07:21.331: INFO: Pod "pod-7759d18e-d2ba-48e5-8ad3-3cb00805ca2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005807324s Apr 29 22:07:23.336: INFO: Pod "pod-7759d18e-d2ba-48e5-8ad3-3cb00805ca2c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011306569s STEP: Saw pod success Apr 29 22:07:23.336: INFO: Pod "pod-7759d18e-d2ba-48e5-8ad3-3cb00805ca2c" satisfied condition "Succeeded or Failed" Apr 29 22:07:23.340: INFO: Trying to get logs from node node2 pod pod-7759d18e-d2ba-48e5-8ad3-3cb00805ca2c container test-container: STEP: delete the pod Apr 29 22:07:23.353: INFO: Waiting for pod pod-7759d18e-d2ba-48e5-8ad3-3cb00805ca2c to disappear Apr 29 22:07:23.355: INFO: Pod pod-7759d18e-d2ba-48e5-8ad3-3cb00805ca2c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:23.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5076" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":604,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:45.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod with failed condition STEP: updating the pod Apr 29 22:06:45.759: INFO: Successfully updated pod "var-expansion-b797e0f3-16a6-40da-a8a6-59c61b1f3c8d" STEP: waiting for pod running STEP: deleting the pod gracefully Apr 29 22:06:47.766: INFO: Deleting pod "var-expansion-b797e0f3-16a6-40da-a8a6-59c61b1f3c8d" in namespace "var-expansion-1695" Apr 29 22:06:47.771: INFO: Wait up to 5m0s for pod "var-expansion-b797e0f3-16a6-40da-a8a6-59c61b1f3c8d" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:23.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1695" for this suite. 
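------------------------------
The mechanism the var-expansion spec drives: volumeMounts[].subPathExpr expands $(VAR) from the container's environment at mount time; the spec first supplies a value whose mount fails (the "pod with failed condition"), then updates the pod so the mount succeeds. A hedged sketch of the shape, with illustrative annotation and values:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// subPathExprPod mounts an emptyDir at a subpath taken from an annotation,
// routed through an env var; patching the annotation changes the subpath.
func subPathExprPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "var-expansion-demo",
			Annotations: map[string]string{"mysubpath": "foo"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command: []string{"sh", "-c", "sleep 3600"},
				Env: []corev1.EnvVar{{
					Name: "POD_SUBPATH",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{
							FieldPath: "metadata.annotations['mysubpath']",
						},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:        "workdir1",
					MountPath:   "/subpath_mount",
					SubPathExpr: "$(POD_SUBPATH)", // expanded from the env var above
				}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "workdir1",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
}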
• [SLOW TEST:158.579 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":20,"skipped":278,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:23.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override command Apr 29 22:07:23.420: INFO: Waiting up to 5m0s for pod "client-containers-7eb4386d-46d1-4c9b-9fcb-f874406ba899" in namespace "containers-2027" to be "Succeeded or Failed" Apr 29 22:07:23.423: INFO: Pod "client-containers-7eb4386d-46d1-4c9b-9fcb-f874406ba899": Phase="Pending", Reason="", readiness=false. Elapsed: 2.941107ms Apr 29 22:07:25.428: INFO: Pod "client-containers-7eb4386d-46d1-4c9b-9fcb-f874406ba899": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007139821s Apr 29 22:07:27.431: INFO: Pod "client-containers-7eb4386d-46d1-4c9b-9fcb-f874406ba899": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010029101s Apr 29 22:07:29.436: INFO: Pod "client-containers-7eb4386d-46d1-4c9b-9fcb-f874406ba899": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015067819s Apr 29 22:07:31.440: INFO: Pod "client-containers-7eb4386d-46d1-4c9b-9fcb-f874406ba899": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019226881s Apr 29 22:07:33.447: INFO: Pod "client-containers-7eb4386d-46d1-4c9b-9fcb-f874406ba899": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.026259773s STEP: Saw pod success Apr 29 22:07:33.447: INFO: Pod "client-containers-7eb4386d-46d1-4c9b-9fcb-f874406ba899" satisfied condition "Succeeded or Failed" Apr 29 22:07:33.449: INFO: Trying to get logs from node node1 pod client-containers-7eb4386d-46d1-4c9b-9fcb-f874406ba899 container agnhost-container: STEP: delete the pod Apr 29 22:07:33.465: INFO: Waiting for pod client-containers-7eb4386d-46d1-4c9b-9fcb-f874406ba899 to disappear Apr 29 22:07:33.466: INFO: Pod client-containers-7eb4386d-46d1-4c9b-9fcb-f874406ba899 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:33.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2027" for this suite. 
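------------------------------
The override under test: .spec.containers[].command replaces the image's ENTRYPOINT (args would replace CMD). A sketch of such a pod; the agnhost tag and subcommand are assumptions for illustration, not read from the suite:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// overrideEntrypointPod replaces the image's default entrypoint with an
// explicit command, which is what the spec then verifies from the logs.
func overrideEntrypointPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // tag assumed
				// Command overrides the image ENTRYPOINT entirely.
				Command: []string{"/agnhost", "entrypoint-tester"},
			}},
		},
	}
}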
• [SLOW TEST:10.087 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":616,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:22.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:33.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2201" for this suite. 
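------------------------------
Two of the STEPs above, "patching ReplicationController" and "patching ReplicationController scale", look roughly like this in client-go; the patch payloads are illustrative:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// patchAndScaleRC applies a strategic-merge patch to an RC's labels, then
// patches the scale subresource to raise replicas.
func patchAndScaleRC(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	// Patch a label onto the ReplicationController itself.
	if _, err := cs.CoreV1().ReplicationControllers(ns).Patch(ctx, name,
		types.StrategicMergePatchType,
		[]byte(`{"metadata":{"labels":{"rc":"patched"}}}`),
		metav1.PatchOptions{}); err != nil {
		return err
	}
	// Patch the "scale" subresource up to 2 replicas.
	_, err := cs.CoreV1().ReplicationControllers(ns).Patch(ctx, name,
		types.StrategicMergePatchType,
		[]byte(`{"spec":{"replicas":2}}`),
		metav1.PatchOptions{}, "scale")
	return err
}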
• [SLOW TEST:10.952 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":58,"skipped":949,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:33.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Apr 29 22:07:33.791: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Apr 29 22:07:33.794: INFO: starting watch STEP: patching STEP: updating Apr 29 22:07:33.804: INFO: waiting for watch events with expected annotations Apr 29 22:07:33.804: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:33.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-6803" for this suite. 
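The Ingress API operations above (create, get, list, watch, patch, update, plus the /status subresource) need only a minimal networking.k8s.io/v1 object. A sketch, with the backend service name and port chosen purely for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	ing := &networkingv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{Name: "test-ingress"},
		Spec: networkingv1.IngressSpec{
			// A default backend is the smallest valid Ingress spec.
			DefaultBackend: &networkingv1.IngressBackend{
				Service: &networkingv1.IngressServiceBackend{
					Name: "example-backend",
					Port: networkingv1.ServiceBackendPort{Number: 80},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ing, "", "  ")
	fmt.Println(string(out))
}
```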
• ------------------------------ {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":59,"skipped":950,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:23.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics Apr 29 22:07:33.918: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) Apr 29 22:07:33.985: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Apr 29 22:07:33.985: INFO: Deleting pod "simpletest-rc-to-be-deleted-2kg2m" in namespace "gc-895" Apr 29 22:07:33.992: INFO: Deleting pod "simpletest-rc-to-be-deleted-59pc6" in namespace "gc-895" Apr 29 22:07:34.000: INFO: Deleting pod "simpletest-rc-to-be-deleted-887m5" in namespace "gc-895" Apr 29 22:07:34.007: INFO: Deleting pod "simpletest-rc-to-be-deleted-d57xr" in namespace "gc-895" Apr 29 22:07:34.013: INFO: Deleting pod "simpletest-rc-to-be-deleted-fjqhg" in namespace "gc-895" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:34.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-895" for this suite. 
• [SLOW TEST:10.230 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":21,"skipped":281,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:34.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should support creating EndpointSlice API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/discovery.k8s.io STEP: getting /apis/discovery.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Apr 29 22:07:34.135: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Apr 29 22:07:34.139: INFO: starting watch STEP: patching STEP: updating Apr 29 22:07:34.159: INFO: waiting for watch events with expected annotations Apr 29 22:07:34.159: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:34.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-2354" for this suite. 
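The EndpointSlice operations above work against a discovery.k8s.io/v1 object like the following sketch; the address, port, and the value of the kubernetes.io/service-name label are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	discoveryv1 "k8s.io/api/discovery/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	portName := "http"
	port := int32(80)
	proto := corev1.ProtocolTCP
	ready := true
	es := &discoveryv1.EndpointSlice{
		ObjectMeta: metav1.ObjectMeta{
			Name: "example-endpoint-slice",
			// This label is how a slice is associated with its Service.
			Labels: map[string]string{"kubernetes.io/service-name": "example"},
		},
		AddressType: discoveryv1.AddressTypeIPv4,
		Endpoints: []discoveryv1.Endpoint{{
			Addresses:  []string{"10.244.1.10"},
			Conditions: discoveryv1.EndpointConditions{Ready: &ready},
		}},
		Ports: []discoveryv1.EndpointPort{{Name: &portName, Port: &port, Protocol: &proto}},
	}
	out, _ := json.MarshalIndent(es, "", "  ")
	fmt.Println(string(out))
}
```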
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":22,"skipped":318,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:33.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:07:33.892: INFO: Got root ca configmap in namespace "svcaccounts-8931" Apr 29 22:07:33.896: INFO: Deleted root ca configmap in namespace "svcaccounts-8931" STEP: waiting for a new root ca configmap created Apr 29 22:07:34.400: INFO: Recreated root ca configmap in namespace "svcaccounts-8931" Apr 29 22:07:34.403: INFO: Updated root ca configmap in namespace "svcaccounts-8931" STEP: waiting for the root ca configmap reconciled Apr 29 22:07:34.907: INFO: Reconciled root ca configmap in namespace "svcaccounts-8931" [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:34.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8931" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":60,"skipped":960,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:33.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-777acda9-a365-479c-8f07-f78a0afccfc5 STEP: Creating a pod to test consume configMaps Apr 29 22:07:33.534: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-103854ed-aaaa-4aea-836d-c12c74647b3f" in namespace "projected-253" to be "Succeeded or Failed" Apr 29 22:07:33.537: INFO: Pod "pod-projected-configmaps-103854ed-aaaa-4aea-836d-c12c74647b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.770294ms Apr 29 22:07:35.540: INFO: Pod "pod-projected-configmaps-103854ed-aaaa-4aea-836d-c12c74647b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006194052s Apr 29 22:07:37.545: INFO: Pod "pod-projected-configmaps-103854ed-aaaa-4aea-836d-c12c74647b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011253652s Apr 29 22:07:39.549: INFO: Pod "pod-projected-configmaps-103854ed-aaaa-4aea-836d-c12c74647b3f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014983622s STEP: Saw pod success Apr 29 22:07:39.549: INFO: Pod "pod-projected-configmaps-103854ed-aaaa-4aea-836d-c12c74647b3f" satisfied condition "Succeeded or Failed" Apr 29 22:07:39.551: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-103854ed-aaaa-4aea-836d-c12c74647b3f container agnhost-container: STEP: delete the pod Apr 29 22:07:39.565: INFO: Waiting for pod pod-projected-configmaps-103854ed-aaaa-4aea-836d-c12c74647b3f to disappear Apr 29 22:07:39.567: INFO: Pod pod-projected-configmaps-103854ed-aaaa-4aea-836d-c12c74647b3f no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:39.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-253" for this suite. • [SLOW TEST:6.076 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":627,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:53.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-9019 Apr 29 22:06:53.572: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:06:55.575: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:06:57.576: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Apr 29 22:06:57.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9019 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Apr 29 22:06:57.841: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Apr 29 22:06:57.841: INFO: stdout: "iptables" Apr 29 22:06:57.841: INFO: proxyMode: iptables Apr 29 22:06:57.849: INFO: Waiting for pod kube-proxy-mode-detector to disappear Apr 29 22:06:57.851: INFO: Pod kube-proxy-mode-detector no longer exists STEP: 
creating service affinity-clusterip-timeout in namespace services-9019 STEP: creating replication controller affinity-clusterip-timeout in namespace services-9019 I0429 22:06:57.862429 31 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-9019, replica count: 3 I0429 22:07:00.914295 31 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 22:07:03.916236 31 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 29 22:07:03.921: INFO: Creating new exec pod Apr 29 22:07:08.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9019 exec execpod-affinityx6lcm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' Apr 29 22:07:09.350: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip-timeout 80\n+ echo hostName\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" Apr 29 22:07:09.350: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 29 22:07:09.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9019 exec execpod-affinityx6lcm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.19.149 80' Apr 29 22:07:09.769: INFO: stderr: "+ nc -v -t -w 2 10.233.19.149 80\n+ echo hostName\nConnection to 10.233.19.149 80 port [tcp/http] succeeded!\n" Apr 29 22:07:09.769: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 29 22:07:09.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9019 exec execpod-affinityx6lcm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.19.149:80/ ; done' Apr 29 22:07:10.070: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.19.149:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.19.149:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.19.149:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.19.149:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.19.149:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.19.149:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.19.149:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.19.149:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.19.149:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.19.149:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.19.149:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.19.149:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.19.149:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.19.149:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.19.149:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.19.149:80/\n" Apr 29 22:07:10.070: INFO: stdout: 
"\naffinity-clusterip-timeout-kbr42\naffinity-clusterip-timeout-kbr42\naffinity-clusterip-timeout-kbr42\naffinity-clusterip-timeout-kbr42\naffinity-clusterip-timeout-kbr42\naffinity-clusterip-timeout-kbr42\naffinity-clusterip-timeout-kbr42\naffinity-clusterip-timeout-kbr42\naffinity-clusterip-timeout-kbr42\naffinity-clusterip-timeout-kbr42\naffinity-clusterip-timeout-kbr42\naffinity-clusterip-timeout-kbr42\naffinity-clusterip-timeout-kbr42\naffinity-clusterip-timeout-kbr42\naffinity-clusterip-timeout-kbr42\naffinity-clusterip-timeout-kbr42" Apr 29 22:07:10.070: INFO: Received response from host: affinity-clusterip-timeout-kbr42 Apr 29 22:07:10.070: INFO: Received response from host: affinity-clusterip-timeout-kbr42 Apr 29 22:07:10.070: INFO: Received response from host: affinity-clusterip-timeout-kbr42 Apr 29 22:07:10.070: INFO: Received response from host: affinity-clusterip-timeout-kbr42 Apr 29 22:07:10.070: INFO: Received response from host: affinity-clusterip-timeout-kbr42 Apr 29 22:07:10.070: INFO: Received response from host: affinity-clusterip-timeout-kbr42 Apr 29 22:07:10.070: INFO: Received response from host: affinity-clusterip-timeout-kbr42 Apr 29 22:07:10.070: INFO: Received response from host: affinity-clusterip-timeout-kbr42 Apr 29 22:07:10.070: INFO: Received response from host: affinity-clusterip-timeout-kbr42 Apr 29 22:07:10.070: INFO: Received response from host: affinity-clusterip-timeout-kbr42 Apr 29 22:07:10.070: INFO: Received response from host: affinity-clusterip-timeout-kbr42 Apr 29 22:07:10.070: INFO: Received response from host: affinity-clusterip-timeout-kbr42 Apr 29 22:07:10.070: INFO: Received response from host: affinity-clusterip-timeout-kbr42 Apr 29 22:07:10.070: INFO: Received response from host: affinity-clusterip-timeout-kbr42 Apr 29 22:07:10.070: INFO: Received response from host: affinity-clusterip-timeout-kbr42 Apr 29 22:07:10.070: INFO: Received response from host: affinity-clusterip-timeout-kbr42 Apr 29 22:07:10.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9019 exec execpod-affinityx6lcm -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.19.149:80/' Apr 29 22:07:10.605: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.19.149:80/\n" Apr 29 22:07:10.605: INFO: stdout: "affinity-clusterip-timeout-kbr42" Apr 29 22:07:30.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9019 exec execpod-affinityx6lcm -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.19.149:80/' Apr 29 22:07:30.968: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.19.149:80/\n" Apr 29 22:07:30.968: INFO: stdout: "affinity-clusterip-timeout-56l2s" Apr 29 22:07:30.968: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-9019, will wait for the garbage collector to delete the pods Apr 29 22:07:31.035: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 5.371064ms Apr 29 22:07:31.136: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.818576ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:45.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9019" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:51.713 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":23,"skipped":445,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:34.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:51.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6504" for this suite. • [SLOW TEST:17.060 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":-1,"completed":23,"skipped":322,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:39.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 29 22:07:40.070: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 29 22:07:42.079: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866860, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866860, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866860, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866860, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 29 22:07:45.095: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:07:45.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:53.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2426" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:13.641 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:34.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-7675 STEP: creating service affinity-clusterip in namespace services-7675 STEP: creating replication controller affinity-clusterip in namespace services-7675 I0429 22:07:34.979311 26 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-7675, replica count: 3 I0429 22:07:38.030265 26 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 22:07:41.030881 26 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 29 22:07:41.036: INFO: Creating new exec pod Apr 29 22:07:46.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7675 exec execpod-affinityntp4b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Apr 29 22:07:46.304: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip 80\n+ echo hostName\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Apr 29 22:07:46.304: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 29 22:07:46.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7675 exec execpod-affinityntp4b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.53.172 80' Apr 29 22:07:46.562: INFO: stderr: "+ nc -v -t -w 2 10.233.53.172 80\n+ echo hostName\nConnection to 10.233.53.172 80 port [tcp/http] succeeded!\n" Apr 29 22:07:46.562: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 29 22:07:46.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7675 exec execpod-affinityntp4b -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.53.172:80/ ; done' Apr 29 22:07:46.858: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.172:80/\n+ echo\n+ curl 
-q -s --connect-timeout 2 http://10.233.53.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.172:80/\n" Apr 29 22:07:46.858: INFO: stdout: "\naffinity-clusterip-zzk2b\naffinity-clusterip-zzk2b\naffinity-clusterip-zzk2b\naffinity-clusterip-zzk2b\naffinity-clusterip-zzk2b\naffinity-clusterip-zzk2b\naffinity-clusterip-zzk2b\naffinity-clusterip-zzk2b\naffinity-clusterip-zzk2b\naffinity-clusterip-zzk2b\naffinity-clusterip-zzk2b\naffinity-clusterip-zzk2b\naffinity-clusterip-zzk2b\naffinity-clusterip-zzk2b\naffinity-clusterip-zzk2b\naffinity-clusterip-zzk2b" Apr 29 22:07:46.858: INFO: Received response from host: affinity-clusterip-zzk2b Apr 29 22:07:46.858: INFO: Received response from host: affinity-clusterip-zzk2b Apr 29 22:07:46.858: INFO: Received response from host: affinity-clusterip-zzk2b Apr 29 22:07:46.858: INFO: Received response from host: affinity-clusterip-zzk2b Apr 29 22:07:46.858: INFO: Received response from host: affinity-clusterip-zzk2b Apr 29 22:07:46.858: INFO: Received response from host: affinity-clusterip-zzk2b Apr 29 22:07:46.858: INFO: Received response from host: affinity-clusterip-zzk2b Apr 29 22:07:46.858: INFO: Received response from host: affinity-clusterip-zzk2b Apr 29 22:07:46.858: INFO: Received response from host: affinity-clusterip-zzk2b Apr 29 22:07:46.858: INFO: Received response from host: affinity-clusterip-zzk2b Apr 29 22:07:46.858: INFO: Received response from host: affinity-clusterip-zzk2b Apr 29 22:07:46.858: INFO: Received response from host: affinity-clusterip-zzk2b Apr 29 22:07:46.858: INFO: Received response from host: affinity-clusterip-zzk2b Apr 29 22:07:46.858: INFO: Received response from host: affinity-clusterip-zzk2b Apr 29 22:07:46.858: INFO: Received response from host: affinity-clusterip-zzk2b Apr 29 22:07:46.858: INFO: Received response from host: affinity-clusterip-zzk2b Apr 29 22:07:46.858: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-7675, will wait for the garbage collector to delete the pods Apr 29 22:07:46.924: INFO: Deleting ReplicationController affinity-clusterip took: 3.799316ms Apr 29 22:07:47.025: INFO: Terminating ReplicationController affinity-clusterip pods took: 101.004616ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:55.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7675" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:20.290 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":61,"skipped":978,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:55.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:55.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1005" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":62,"skipped":994,"failed":0} SSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:22.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:07:22.530: INFO: created pod Apr 29 22:07:22.530: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-2039" to be "Succeeded or Failed" Apr 29 22:07:22.532: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168865ms Apr 29 22:07:24.540: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010154003s Apr 29 22:07:26.546: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015973371s Apr 29 22:07:28.549: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.019517579s STEP: Saw pod success Apr 29 22:07:28.550: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" Apr 29 22:07:58.550: INFO: polling logs Apr 29 22:07:58.556: INFO: Pod logs: 2022/04/29 22:07:24 OK: Got token 2022/04/29 22:07:24 validating with in-cluster discovery 2022/04/29 22:07:24 OK: got issuer https://kubernetes.default.svc.cluster.local 2022/04/29 22:07:24 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-2039:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1651270642, NotBefore:1651270042, IssuedAt:1651270042, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-2039", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"ea060f14-907e-4705-9508-16b184f199a0"}}} 2022/04/29 22:07:24 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local 2022/04/29 22:07:24 OK: Validated signature on JWT 2022/04/29 22:07:24 OK: Got valid claims from token! 2022/04/29 22:07:24 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-2039:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1651270642, NotBefore:1651270042, IssuedAt:1651270042, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-2039", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"ea060f14-907e-4705-9508-16b184f199a0"}}} Apr 29 22:07:58.556: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:58.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2039" for this suite. 
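In the test above, the oidc-discovery-validator pod receives a projected service-account token with a custom audience, then resolves the issuer's /.well-known/openid-configuration document and its JWKS to validate the token's signature, as the pod logs show. A sketch of the token projection; the volume name is illustrative, while the audience matches the log and the 600-second lifetime matches the Expiry minus IssuedAt claims printed there:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiration := int64(600) // matches Expiry - IssuedAt in the logged claims
	vol := corev1.Volume{
		Name: "oidc-token", // illustrative volume name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						// Audience must match what the validator expects;
						// the log shows "oidc-discovery-test".
						Audience:          "oidc-discovery-test",
						ExpirationSeconds: &expiration,
						Path:              "token",
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```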
• [SLOW TEST:36.071 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":39,"skipped":822,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":34,"skipped":632,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:53.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-1bbe8ec7-d547-4c3f-a951-8bc2e8087d65 STEP: Creating a pod to test consume secrets Apr 29 22:07:53.278: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-905f06a0-a3e4-4d1d-be9a-7fd56907465a" in namespace "projected-4008" to be "Succeeded or Failed" Apr 29 22:07:53.280: INFO: Pod "pod-projected-secrets-905f06a0-a3e4-4d1d-be9a-7fd56907465a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.3151ms Apr 29 22:07:55.285: INFO: Pod "pod-projected-secrets-905f06a0-a3e4-4d1d-be9a-7fd56907465a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00719576s Apr 29 22:07:57.290: INFO: Pod "pod-projected-secrets-905f06a0-a3e4-4d1d-be9a-7fd56907465a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012213529s Apr 29 22:07:59.294: INFO: Pod "pod-projected-secrets-905f06a0-a3e4-4d1d-be9a-7fd56907465a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015607694s STEP: Saw pod success Apr 29 22:07:59.294: INFO: Pod "pod-projected-secrets-905f06a0-a3e4-4d1d-be9a-7fd56907465a" satisfied condition "Succeeded or Failed" Apr 29 22:07:59.296: INFO: Trying to get logs from node node2 pod pod-projected-secrets-905f06a0-a3e4-4d1d-be9a-7fd56907465a container projected-secret-volume-test: STEP: delete the pod Apr 29 22:07:59.309: INFO: Waiting for pod pod-projected-secrets-905f06a0-a3e4-4d1d-be9a-7fd56907465a to disappear Apr 29 22:07:59.311: INFO: Pod pod-projected-secrets-905f06a0-a3e4-4d1d-be9a-7fd56907465a no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:07:59.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4008" for this suite. • [SLOW TEST:6.090 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":632,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:44.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0429 22:06:44.211478 38 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: Ensuring more than one job is running at a time STEP: Ensuring at least two running jobs exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:08:00.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-145" for this suite. 
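The CronJob test above needs two Jobs running at once, which takes a schedule that fires frequently, a Pod that outlives one schedule interval, and concurrencyPolicy Allow. A sketch using the batch/v1 API (the run itself still used batch/v1beta1, hence the deprecation warning in the log); the image and sleep duration are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cj := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "concurrent"},
		Spec: batchv1.CronJobSpec{
			// Fire every minute; AllowConcurrent lets a new Job start while
			// the previous one is still running, which is what the test asserts.
			Schedule:          "*/1 * * * *",
			ConcurrencyPolicy: batchv1.AllowConcurrent,
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "c",
								Image:   "busybox", // illustrative image
								Command: []string{"sleep", "300"},
							}},
						},
					},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(cj, "", "  ")
	fmt.Println(string(out))
}
```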
• [SLOW TEST:76.041 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":38,"skipped":600,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:51.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 29 22:07:51.503: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 29 22:07:53.513: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866871, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866871, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866871, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866871, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 29 22:07:56.524: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Apr 29 22:08:00.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=webhook-7212 attach --namespace=webhook-7212 to-be-attached-pod -i -c=container1' Apr 29 22:08:00.721: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:08:00.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7212" for this suite. STEP: Destroying namespace "webhook-7212-markers" for this suite. 
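`kubectl attach` performs a CONNECT operation on the pods/attach subresource, so the webhook registered above only needs to match that one rule to deny the attach (the `rc: 1` in the log). A sketch of such a registration; the service name and namespace come from the log, while the path, CA bundle, and configuration name are placeholders:

```go
package main

import (
	"encoding/json"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	path := "/pods/attach" // assumed webhook path
	failurePolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-attaching-pod"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-attaching-pod.example.com",
			// CONNECT on pods/attach is the operation `kubectl attach` performs.
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Connect},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods/attach"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-7212",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: []byte("-----BEGIN CERTIFICATE-----\n...placeholder...\n-----END CERTIFICATE-----"),
			},
			FailurePolicy:           &failurePolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}
```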
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.483 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":24,"skipped":327,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:08:00.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Apr 29 22:08:00.307: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-6241 b29ce510-1334-4a2e-81b5-e036fa8a97b4 49119 0 2022-04-29 22:08:00 +0000 UTC map[] map[kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2022-04-29 22:08:00 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-c8ld9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPor
t{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c8ld9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 29 22:08:00.313: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:08:02.317: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:08:04.318: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Apr 29 22:08:04.318: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6241 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 22:08:04.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Verifying customized DNS server is configured on pod... Apr 29 22:08:04.420: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6241 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 22:08:04.420: INFO: >>> kubeConfig: /root/.kube/config Apr 29 22:08:04.534: INFO: Deleting pod test-dns-nameservers... 
[AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:08:04.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6241" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":39,"skipped":624,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:45.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:07:49.314: INFO: Deleting pod "var-expansion-c0f96d6f-1042-4fc6-aca6-50ca8c496084" in namespace "var-expansion-8704" Apr 29 22:07:49.318: INFO: Wait up to 5m0s for pod "var-expansion-c0f96d6f-1042-4fc6-aca6-50ca8c496084" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:08:05.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8704" for this suite. • [SLOW TEST:20.061 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":24,"skipped":454,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:59.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-126b2b97-1700-4fe6-a845-8ed2459402d3 STEP: Creating a pod to test consume secrets Apr 29 22:07:59.367: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f899f237-913b-44b3-b17c-2f3990e21efa" in namespace "projected-5128" to be "Succeeded or Failed" Apr 29 22:07:59.369: INFO: Pod "pod-projected-secrets-f899f237-913b-44b3-b17c-2f3990e21efa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.56792ms Apr 29 22:08:01.373: INFO: Pod "pod-projected-secrets-f899f237-913b-44b3-b17c-2f3990e21efa": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006098328s Apr 29 22:08:03.377: INFO: Pod "pod-projected-secrets-f899f237-913b-44b3-b17c-2f3990e21efa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010250357s Apr 29 22:08:05.383: INFO: Pod "pod-projected-secrets-f899f237-913b-44b3-b17c-2f3990e21efa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016459535s STEP: Saw pod success Apr 29 22:08:05.383: INFO: Pod "pod-projected-secrets-f899f237-913b-44b3-b17c-2f3990e21efa" satisfied condition "Succeeded or Failed" Apr 29 22:08:05.385: INFO: Trying to get logs from node node1 pod pod-projected-secrets-f899f237-913b-44b3-b17c-2f3990e21efa container projected-secret-volume-test: STEP: delete the pod Apr 29 22:08:05.400: INFO: Waiting for pod pod-projected-secrets-f899f237-913b-44b3-b17c-2f3990e21efa to disappear Apr 29 22:08:05.402: INFO: Pod pod-projected-secrets-f899f237-913b-44b3-b17c-2f3990e21efa no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:08:05.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5128" for this suite. • [SLOW TEST:6.077 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":639,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:08:05.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:08:05.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-3683" for this suite. 
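[Editor's note] The Lease test above only checks that the coordination.k8s.io/v1 API is served. A hedged sketch of the same round-trip (create, then renew via update) follows; the namespace is taken from the log, while the lease name, holder identity, and duration are illustrative placeholders, not values from this run.

// Sketch: exercise the coordination.k8s.io/v1 Lease API (create + renew).
package main

import (
	"context"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	leases := client.CoordinationV1().Leases("lease-test-3683")

	holder := "holder-1"          // illustrative
	duration := int32(30)         // seconds; illustrative
	lease, err := leases.Create(context.TODO(), &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: "example-lease"},
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       &holder,
			LeaseDurationSeconds: &duration,
		},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Renew: bump RenewTime and update, as a leader-election client would.
	now := metav1.NowMicro()
	lease.Spec.RenewTime = &now
	if _, err := leases.Update(context.TODO(), lease, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}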
• ------------------------------ {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":37,"skipped":661,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:55.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:07:55.354: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 29 22:08:03.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9300 --namespace=crd-publish-openapi-9300 create -f -' Apr 29 22:08:04.491: INFO: stderr: "" Apr 29 22:08:04.491: INFO: stdout: "e2e-test-crd-publish-openapi-6101-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 29 22:08:04.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9300 --namespace=crd-publish-openapi-9300 delete e2e-test-crd-publish-openapi-6101-crds test-cr' Apr 29 22:08:04.678: INFO: stderr: "" Apr 29 22:08:04.678: INFO: stdout: "e2e-test-crd-publish-openapi-6101-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 29 22:08:04.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9300 --namespace=crd-publish-openapi-9300 apply -f -' Apr 29 22:08:05.018: INFO: stderr: "" Apr 29 22:08:05.018: INFO: stdout: "e2e-test-crd-publish-openapi-6101-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 29 22:08:05.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9300 --namespace=crd-publish-openapi-9300 delete e2e-test-crd-publish-openapi-6101-crds test-cr' Apr 29 22:08:05.186: INFO: stderr: "" Apr 29 22:08:05.186: INFO: stdout: "e2e-test-crd-publish-openapi-6101-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Apr 29 22:08:05.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9300 explain e2e-test-crd-publish-openapi-6101-crds' Apr 29 22:08:05.565: INFO: stderr: "" Apr 29 22:08:05.565: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6101-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:08:09.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9300" for this suite. 
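[Editor's note] The CRD test above registers a custom resource with no meaningful validation schema, then shows that kubectl create/apply accept arbitrary unknown properties and that kubectl explain returns an empty DESCRIPTION (the ~8 s gap between 22:07:55 and 22:08:03 is the test waiting for the OpenAPI to be published). A sketch of such a CRD follows, using the group and plural visible in the log; note that in apiextensions.k8s.io/v1 a structural schema is mandatory, so the assumption here is that the "no validation" effect is expressed with x-kubernetes-preserve-unknown-fields rather than an omitted schema.

// Sketch: a CRD whose schema accepts any unknown properties.
package main

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := apiextensionsclient.NewForConfigOrDie(cfg)

	preserve := true
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{
			Name: "e2e-test-crd-publish-openapi-6101-crds.crd-publish-openapi-test-empty.example.com",
		},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "crd-publish-openapi-test-empty.example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "e2e-test-crd-publish-openapi-6101-crds",
				Kind:   "E2e-test-crd-publish-openapi-6101-crd",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: &preserve, // accept any properties
					},
				},
			}},
		},
	}
	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}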
• [SLOW TEST:13.953 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":63,"skipped":997,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:08:09.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics Apr 29 22:08:15.391: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) Apr 29 22:08:15.453: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:08:15.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3035" for this suite. 
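[Editor's note] The garbage-collector test above deletes a replication controller with deleteOptions that keep the RC around until all of its pods are gone; that behavior corresponds to foreground cascading deletion. A hedged sketch of such a delete call follows; the namespace is from the log, the RC name is a placeholder, and the choice of Foreground propagation is inferred from the test's description rather than shown in the log.

// Sketch: foreground deletion keeps the owner (with a deletionTimestamp and
// the foregroundDeletion finalizer) until the GC has removed its dependents.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	policy := metav1.DeletePropagationForeground
	err = client.CoreV1().ReplicationControllers("gc-3035").Delete(
		context.TODO(), "simpletest.rc", // RC name is a placeholder
		metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
	// With DeletePropagationOrphan instead, the RC object would be removed
	// immediately and its pods left running, unowned.
}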
• [SLOW TEST:6.139 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":64,"skipped":1015,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Apr 29 22:08:15.541: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:08:00.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:08:00.793: INFO: Creating deployment "test-recreate-deployment" Apr 29 22:08:00.796: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 29 22:08:00.800: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 29 22:08:02.808: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 29 22:08:02.811: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866880, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866880, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866880, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866880, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 22:08:04.814: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866880, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866880, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866880, 
loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786866880, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} [identical "deployment status" polls at 22:08:06.815, 22:08:08.814, 22:08:10.815, 22:08:12.815, 22:08:14.815, and 22:08:16.815 omitted; each reports ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, UnavailableReplicas:1, Available=False (MinimumReplicasUnavailable) while ReplicaSet "test-recreate-deployment-6cb8b65c46" is progressing] Apr 29 22:08:18.818: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 29 22:08:18.824: INFO: Updating deployment test-recreate-deployment Apr 29 22:08:18.825: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Apr 29 22:08:18.867: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-5683 88274bed-10e5-4f1c-b626-dac0d7bc104f 49807 2 2022-04-29 22:08:00 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-04-29 22:08:18 +0000 UTC FieldsV1
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-04-29 22:08:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004741fc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-04-29 22:08:18 +0000 UTC,LastTransitionTime:2022-04-29 22:08:18 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2022-04-29 22:08:18 +0000 UTC,LastTransitionTime:2022-04-29 22:08:00 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 29 22:08:18.870: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-5683 93761165-0da9-43dc-a4bb-2ec69ade61c8 49805 1 2022-04-29 22:08:18 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 88274bed-10e5-4f1c-b626-dac0d7bc104f 0xc00097e440 0xc00097e441}] [] [{kube-controller-manager Update apps/v1 2022-04-29 22:08:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"88274bed-10e5-4f1c-b626-dac0d7bc104f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00097e4b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 29 22:08:18.870: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 29 22:08:18.871: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6cb8b65c46 deployment-5683 f55d3b3c-32cc-4aa5-bff9-97f1829b2e9e 49796 2 2022-04-29 22:08:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 88274bed-10e5-4f1c-b626-dac0d7bc104f 0xc00097e347 0xc00097e348}] [] [{kube-controller-manager Update apps/v1 2022-04-29 22:08:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"88274bed-10e5-4f1c-b626-dac0d7bc104f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6cb8b65c46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00097e3d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 29 22:08:18.875: INFO: Pod "test-recreate-deployment-85d47dcb4-zrhbz" is not available: &Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-zrhbz test-recreate-deployment-85d47dcb4- deployment-5683 0862e070-e5a0-45e5-b055-96baa06c885d 49804 0 2022-04-29 22:08:18 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 93761165-0da9-43dc-a4bb-2ec69ade61c8 0xc00097e8ef 0xc00097e900}] [] [{kube-controller-manager Update v1 2022-04-29 22:08:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93761165-0da9-43dc-a4bb-2ec69ade61c8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bv8lv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bv8lv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 22:08:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:08:18.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5683" for this suite. • [SLOW TEST:18.111 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:06:03.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3767 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-3767 I0429 22:06:03.555721 28 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3767, replica count: 2 I0429 22:06:06.606735 28 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 22:06:09.607676 28 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 29 22:06:09.607: INFO: Creating new exec pod Apr 29 22:06:14.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Apr 29 22:06:14.911: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Apr 29 22:06:14.911: 
INFO: stdout: "" Apr 29 22:06:15.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Apr 29 22:06:16.364: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Apr 29 22:06:16.364: INFO: stdout: "" Apr 29 22:06:16.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Apr 29 22:06:17.197: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Apr 29 22:06:17.197: INFO: stdout: "" Apr 29 22:06:17.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Apr 29 22:06:18.195: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Apr 29 22:06:18.196: INFO: stdout: "externalname-service-4lmb6" Apr 29 22:06:18.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.62.209 80' Apr 29 22:06:18.481: INFO: stderr: "+ nc -v -t -w 2 10.233.62.209 80\nConnection to 10.233.62.209 80 port [tcp/http] succeeded!\n+ echo hostName\n" Apr 29 22:06:18.481: INFO: stdout: "externalname-service-2qv2w" Apr 29 22:06:18.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:18.748: INFO: rc: 1 Apr 29 22:06:18.748: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:19.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:20.002: INFO: rc: 1 Apr 29 22:06:20.002: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:06:20.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:20.994: INFO: rc: 1 Apr 29 22:06:20.994: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:21.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:22.008: INFO: rc: 1 Apr 29 22:06:22.008: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:22.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:23.137: INFO: rc: 1 Apr 29 22:06:23.137: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:23.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:24.008: INFO: rc: 1 Apr 29 22:06:24.008: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:24.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:25.000: INFO: rc: 1 Apr 29 22:06:25.000: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:06:25.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:26.289: INFO: rc: 1 Apr 29 22:06:26.290: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:26.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:26.989: INFO: rc: 1 Apr 29 22:06:26.989: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:27.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:28.005: INFO: rc: 1 Apr 29 22:06:28.005: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:28.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:29.008: INFO: rc: 1 Apr 29 22:06:29.008: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:29.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:29.982: INFO: rc: 1 Apr 29 22:06:29.982: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:06:30.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:31.006: INFO: rc: 1 Apr 29 22:06:31.006: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:31.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:32.114: INFO: rc: 1 Apr 29 22:06:32.114: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:32.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:33.020: INFO: rc: 1 Apr 29 22:06:33.020: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:33.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:34.106: INFO: rc: 1 Apr 29 22:06:34.106: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:34.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:35.001: INFO: rc: 1 Apr 29 22:06:35.001: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:06:35.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:36.007: INFO: rc: 1 Apr 29 22:06:36.007: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:36.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:36.997: INFO: rc: 1 Apr 29 22:06:36.997: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:37.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:38.033: INFO: rc: 1 Apr 29 22:06:38.034: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:38.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:38.990: INFO: rc: 1 Apr 29 22:06:38.991: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:39.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:40.004: INFO: rc: 1 Apr 29 22:06:40.004: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 29 22:06:40.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:40.989: INFO: rc: 1 Apr 29 22:06:40.989: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:41.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:41.994: INFO: rc: 1 Apr 29 22:06:41.994: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:42.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:43.010: INFO: rc: 1 Apr 29 22:06:43.010: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:43.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:43.989: INFO: rc: 1 Apr 29 22:06:43.989: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Apr 29 22:06:44.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079' Apr 29 22:06:45.014: INFO: rc: 1 Apr 29 22:06:45.014: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31079 nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
[... some ninety further probe attempts elided: the same command was retried roughly once per second from Apr 29 22:06:45 through Apr 29 22:08:18, and every attempt failed the same way, with rc: 1 and "nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused" ...]
Apr 29 22:08:18.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079'
Apr 29 22:08:18.999: INFO: rc: 1
Apr 29 22:08:18.999: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31079
nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 22:08:19.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079'
Apr 29 22:08:19.245: INFO: rc: 1
Apr 29 22:08:19.245: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3767 exec execpod6gh7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31079:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31079
nc: connect to 10.10.190.207 port 31079 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 22:08:19.246: FAIL: Unexpected error:
    <*errors.errorString | 0xc004ba0320>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31079 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31079 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.15()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 +0x358
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00036d680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00036d680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00036d680, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Apr 29 22:08:19.247: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-3767".
STEP: Found 17 events.
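
The retry loop above is the framework polling the NodePort roughly once a second until it answers or the 2m0s budget expires; the FAIL at service.go:1351 is simply that poll timing out. As a rough, minimal sketch of the same pattern — assuming the wait helper from k8s.io/apimachinery and shelling out to kubectl exactly as the log shows; the real check in test/e2e/network/service.go differs in detail — it would look something like this:

package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// probeNodePort mirrors the command in the log: exec into the helper pod
// and attempt a short TCP connect with nc. A non-zero exit status means
// the endpoint is not serving yet.
func probeNodePort(ns, pod, host string, port int) error {
	sh := fmt.Sprintf("echo hostName | nc -v -t -w 2 %s %d", host, port)
	return exec.Command("kubectl", "--namespace", ns,
		"exec", pod, "--", "/bin/sh", "-x", "-c", sh).Run()
}

func main() {
	// Poll once per second, matching the cadence of the timestamps above,
	// and give up after the same 2m0s budget the test reports.
	err := wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		if probeNodePort("services-3767", "execpod6gh7z", "10.10.190.207", 31079) != nil {
			fmt.Println("Retrying...") // connection refused: keep polling
			return false, nil
		}
		return true, nil
	})
	if err != nil {
		fmt.Printf("service is not reachable within 2m0s timeout on endpoint %s:%d over TCP protocol\n",
			"10.10.190.207", 31079)
	}
}

The event and node dumps that follow are the framework's standard post-failure diagnostics: namespace events, then per-node info, kubelet pod listings, and latency metrics.
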
Apr 29 22:08:19.275: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod6gh7z: { } Scheduled: Successfully assigned services-3767/execpod6gh7z to node1
Apr 29 22:08:19.275: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-2qv2w: { } Scheduled: Successfully assigned services-3767/externalname-service-2qv2w to node1
Apr 29 22:08:19.275: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-4lmb6: { } Scheduled: Successfully assigned services-3767/externalname-service-4lmb6 to node2
Apr 29 22:08:19.275: INFO: At 2022-04-29 22:06:03 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-2qv2w
Apr 29 22:08:19.275: INFO: At 2022-04-29 22:06:03 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-4lmb6
Apr 29 22:08:19.275: INFO: At 2022-04-29 22:06:05 +0000 UTC - event for externalname-service-2qv2w: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 22:08:19.275: INFO: At 2022-04-29 22:06:05 +0000 UTC - event for externalname-service-2qv2w: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 282.810045ms
Apr 29 22:08:19.275: INFO: At 2022-04-29 22:06:05 +0000 UTC - event for externalname-service-4lmb6: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 22:08:19.275: INFO: At 2022-04-29 22:06:05 +0000 UTC - event for externalname-service-4lmb6: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 285.861554ms
Apr 29 22:08:19.275: INFO: At 2022-04-29 22:06:06 +0000 UTC - event for externalname-service-2qv2w: {kubelet node1} Started: Started container externalname-service
Apr 29 22:08:19.275: INFO: At 2022-04-29 22:06:06 +0000 UTC - event for externalname-service-2qv2w: {kubelet node1} Created: Created container externalname-service
Apr 29 22:08:19.275: INFO: At 2022-04-29 22:06:06 +0000 UTC - event for externalname-service-4lmb6: {kubelet node2} Started: Started container externalname-service
Apr 29 22:08:19.275: INFO: At 2022-04-29 22:06:06 +0000 UTC - event for externalname-service-4lmb6: {kubelet node2} Created: Created container externalname-service
Apr 29 22:08:19.275: INFO: At 2022-04-29 22:06:11 +0000 UTC - event for execpod6gh7z: {kubelet node1} Started: Started container agnhost-container
Apr 29 22:08:19.275: INFO: At 2022-04-29 22:06:11 +0000 UTC - event for execpod6gh7z: {kubelet node1} Created: Created container agnhost-container
Apr 29 22:08:19.275: INFO: At 2022-04-29 22:06:11 +0000 UTC - event for execpod6gh7z: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 22:08:19.275: INFO: At 2022-04-29 22:06:11 +0000 UTC - event for execpod6gh7z: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 273.049229ms
Apr 29 22:08:19.280: INFO: POD NODE PHASE GRACE CONDITIONS
Apr 29 22:08:19.280: INFO: execpod6gh7z node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:06:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:06:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:06:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:06:09 +0000 UTC }]
Apr 29 22:08:19.280: INFO: externalname-service-2qv2w node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:06:03 +0000
UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:06:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:06:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:06:03 +0000 UTC }] Apr 29 22:08:19.280: INFO: externalname-service-4lmb6 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:06:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:06:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:06:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:06:03 +0000 UTC }] Apr 29 22:08:19.280: INFO: Apr 29 22:08:19.285: INFO: Logging node info for node master1 Apr 29 22:08:19.287: INFO: Node Info: &Node{ObjectMeta:{master1 c968c2e7-7594-4f6e-b85d-932008e8124f 49753 0 2022-04-29 19:57:18 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-29 19:57:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:05:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-04-29 20:08:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:15 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:08:16 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:08:16 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:08:16 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:08:16 +0000 UTC,LastTransitionTime:2022-04-29 20:00:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c3419fad4d2d4c5c9574e5b11ef92b4b,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:5e0f934f-c777-4827-ade6-efec15a825ef,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f09acec459e39fddbd00d2ff6975dd7715ddae0b47f70ed62d6f52e6be7e3f22 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 29 22:08:19.288: INFO: Logging kubelet events for node master1 Apr 29 22:08:19.290: INFO: Logging pods the kubelet thinks is on node master1 Apr 29 22:08:19.309: INFO: kube-scheduler-master1 started at 2022-04-29 20:16:35 +0000 UTC (0+1 container statuses 
recorded) Apr 29 22:08:19.309: INFO: Container kube-scheduler ready: true, restart count 1 Apr 29 22:08:19.309: INFO: kube-proxy-9s46x started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.309: INFO: Container kube-proxy ready: true, restart count 1 Apr 29 22:08:19.309: INFO: kube-flannel-cskzh started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded) Apr 29 22:08:19.309: INFO: Init container install-cni ready: true, restart count 0 Apr 29 22:08:19.309: INFO: Container kube-flannel ready: true, restart count 1 Apr 29 22:08:19.309: INFO: kube-multus-ds-amd64-w54d6 started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.309: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:08:19.309: INFO: node-feature-discovery-controller-cff799f9f-zpv5m started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.309: INFO: Container nfd-controller ready: true, restart count 0 Apr 29 22:08:19.309: INFO: node-exporter-svkqv started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded) Apr 29 22:08:19.309: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:08:19.309: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:08:19.309: INFO: kube-apiserver-master1 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.309: INFO: Container kube-apiserver ready: true, restart count 0 Apr 29 22:08:19.309: INFO: kube-controller-manager-master1 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.310: INFO: Container kube-controller-manager ready: true, restart count 2 Apr 29 22:08:19.310: INFO: coredns-8474476ff8-59qm6 started at 2022-04-29 20:00:39 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.310: INFO: Container coredns ready: true, restart count 1 Apr 29 22:08:19.310: INFO: container-registry-65d7c44b96-np5nk started at 2022-04-29 20:04:54 +0000 UTC (0+2 container statuses recorded) Apr 29 22:08:19.310: INFO: Container docker-registry ready: true, restart count 0 Apr 29 22:08:19.310: INFO: Container nginx ready: true, restart count 0 Apr 29 22:08:19.401: INFO: Latency metrics for node master1 Apr 29 22:08:19.401: INFO: Logging node info for node master2 Apr 29 22:08:19.403: INFO: Node Info: &Node{ObjectMeta:{master2 5b362581-f2d5-419c-a0b0-3aad7bec82f9 49781 0 2022-04-29 19:57:49 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-29 19:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:10:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:15 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:08:18 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:08:18 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:08:18 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:08:18 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d055250c7e194b8a9a572c232266a800,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:fb9f32a4-f021-45dd-bddf-6f1d5ae9abae,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 29 22:08:19.404: INFO: Logging kubelet events for node master2 Apr 29 22:08:19.406: INFO: Logging pods the kubelet thinks is on node master2 Apr 29 22:08:19.421: INFO: kube-multus-ds-amd64-txslv started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.421: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:08:19.421: INFO: kube-apiserver-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.421: INFO: Container kube-apiserver ready: true, restart count 0 Apr 29 22:08:19.421: INFO: kube-proxy-4dnjw started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.421: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:08:19.421: INFO: kube-flannel-q2wgv started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded) Apr 29 22:08:19.421: INFO: Init container install-cni ready: true, restart count 0 Apr 29 22:08:19.421: INFO: Container kube-flannel ready: true, restart count 1 Apr 29 22:08:19.421: INFO: coredns-8474476ff8-bg2wr started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.421: INFO: Container coredns ready: true, restart count 2 Apr 29 22:08:19.421: INFO: prometheus-operator-585ccfb458-q8r6q started at 2022-04-29 20:13:20 +0000 UTC (0+2 container statuses recorded) Apr 29 22:08:19.421: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:08:19.421: INFO: Container prometheus-operator ready: true, restart count 0 Apr 29 22:08:19.421: INFO: node-exporter-9rgc2 started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded) Apr 29 22:08:19.421: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:08:19.421: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:08:19.421: INFO: kube-controller-manager-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.421: INFO: Container kube-controller-manager ready: true, restart count 1 Apr 29 22:08:19.421: INFO: kube-scheduler-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.421: INFO: Container kube-scheduler ready: true, restart count 3 Apr 29 22:08:19.421: INFO: dns-autoscaler-7df78bfcfb-csfp5 started at 2022-04-29 20:00:43 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.421: INFO: Container autoscaler ready: true, restart count 1 Apr 29 22:08:19.504: INFO: Latency metrics for node master2 Apr 29 22:08:19.504: INFO: Logging node info for node master3 Apr 29 22:08:19.506: INFO: Node Info: &Node{ObjectMeta:{master3 1096e515-b559-4c90-b0f7-3398537b5f9e 49783 0 2022-04-29 19:58:00 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw 
flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-29 19:58:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:10:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:16 +0000 UTC,LastTransitionTime:2022-04-29 20:03:16 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:08:18 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:08:18 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 
22:08:18 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:08:18 +0000 UTC,LastTransitionTime:2022-04-29 20:00:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8955b376e6314525a9e533e277f5f4fb,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:6ffefaf4-8a5c-4288-a6a9-78ef35aa67ef,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 29 22:08:19.507: INFO: Logging kubelet events for node master3 Apr 29 22:08:19.509: INFO: Logging pods the kubelet thinks is on node master3 Apr 29 22:08:19.519: INFO: kube-apiserver-master3 started at 2022-04-29 19:58:29 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.519: INFO: Container kube-apiserver ready: true, restart count 0 Apr 29 22:08:19.519: INFO: kube-controller-manager-master3 started at 2022-04-29 20:06:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.519: INFO: Container kube-controller-manager ready: true, restart count 3 Apr 29 22:08:19.519: INFO: kube-scheduler-master3 started at 2022-04-29 20:06:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.519: INFO: Container kube-scheduler ready: true, restart count 2 Apr 29 22:08:19.519: INFO: kube-proxy-gs7qh started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.519: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:08:19.519: INFO: kube-flannel-g8w9b started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded) Apr 29 22:08:19.519: INFO: Init container install-cni ready: true, restart count 0 Apr 29 22:08:19.519: INFO: Container kube-flannel ready: true, restart count 2 Apr 29 22:08:19.519: INFO: kube-multus-ds-amd64-lxrlj started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.519: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:08:19.519: INFO: node-exporter-gdq6v started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded) Apr 29 22:08:19.519: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:08:19.519: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:08:19.597: INFO: Latency metrics for node master3 Apr 29 22:08:19.597: INFO: Logging node info for node node1 Apr 29 22:08:19.600: INFO: Node Info: &Node{ObjectMeta:{node1 6842a10e-614a-46f0-b405-bc18936b0017 49650 0 2022-04-29 19:59:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true 
feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-29 20:08:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-29 20:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-29 20:11:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:02:57 +0000 UTC,LastTransitionTime:2022-04-29 20:02:57 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:08:12 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:08:12 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:08:12 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:08:12 +0000 UTC,LastTransitionTime:2022-04-29 20:00:14 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2a0958eb1b3044f2963c9e5f2e902173,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:fc6a2d14-7726-4aec-9428-6617632ddcbe,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:cfef1b50441378a7b326a606756a12e664a435cc215d910f7aa9415cfde56361 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:145b4fe543408db530a0d8880c681aaa0e3df9b949467d93bcecf42e8625a181 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f09acec459e39fddbd00d2ff6975dd7715ddae0b47f70ed62d6f52e6be7e3f22 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 29 22:08:19.600: INFO: Logging kubelet events for node node1 Apr 29 22:08:19.603: INFO: Logging pods the kubelet thinks is on node node1 Apr 29 22:08:19.621: INFO: nginx-proxy-node1 started at 2022-04-29 19:59:05 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.621: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 22:08:19.621: INFO: cmk-f5znp started at 2022-04-29 20:12:25 +0000 UTC (0+2 container statuses recorded) Apr 29 22:08:19.621: INFO: Container nodereport ready: true, restart count 0 Apr 29 22:08:19.621: INFO: Container reconcile ready: true, restart count 0 Apr 29 22:08:19.621: INFO: simpletest.rc-rfwkt started at 2022-04-29 22:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.621: INFO: Container nginx ready: true, restart count 0 Apr 29 22:08:19.621: INFO: simpletest.rc-wtr8s started at 2022-04-29 22:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.621: INFO: Container nginx ready: true, restart count 0 Apr 29 22:08:19.621: INFO: simpletest.rc-8dcdp started at 2022-04-29 22:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.621: INFO: Container nginx ready: true, restart count 0 Apr 29 22:08:19.621: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq started at 2022-04-29 20:09:17 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.621: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 22:08:19.621: INFO: cmk-init-discover-node1-gxlbt started at 2022-04-29 20:11:43 +0000 UTC (0+3 container statuses recorded) Apr 29 22:08:19.621: INFO: Container discover ready: false, restart count 0 Apr 29 22:08:19.621: INFO: Container init ready: false, restart count 0 Apr 29 22:08:19.621: INFO: Container install ready: false, restart count 0 Apr 29 22:08:19.621: INFO: prometheus-k8s-0 started at 2022-04-29 20:13:38 +0000 UTC (0+4 container statuses recorded) Apr 29 22:08:19.621: INFO: Container config-reloader ready: true, restart count 0 Apr 29 22:08:19.621: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Apr 29 22:08:19.621: INFO: Container grafana ready: true, restart count 0 Apr 29 22:08:19.621: INFO: Container prometheus ready: true, restart count 1 Apr 29 22:08:19.621: INFO: tas-telemetry-aware-scheduling-84ff454dfb-khdw5 started at 2022-04-29 20:16:34 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.621: INFO: Container tas-extender ready: true, restart count 0 Apr 29 22:08:19.621: INFO: kube-proxy-v9tgj started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.621: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:08:19.621: INFO: node-exporter-c8777 started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded) Apr 29 22:08:19.622: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:08:19.622: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:08:19.622: INFO: ss2-0 started at 2022-04-29 22:08:05 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.622: INFO: Container webserver ready: true, restart count 0 Apr 29 22:08:19.622: INFO: kube-flannel-47phs started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded) Apr 29 22:08:19.622: INFO: Init container install-cni ready: 
true, restart count 2 Apr 29 22:08:19.622: INFO: Container kube-flannel ready: true, restart count 2 Apr 29 22:08:19.622: INFO: collectd-ccgw2 started at 2022-04-29 20:17:24 +0000 UTC (0+3 container statuses recorded) Apr 29 22:08:19.622: INFO: Container collectd ready: true, restart count 0 Apr 29 22:08:19.622: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 22:08:19.622: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 22:08:19.622: INFO: execpod6gh7z started at 2022-04-29 22:06:09 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.622: INFO: Container agnhost-container ready: true, restart count 0 Apr 29 22:08:19.622: INFO: kube-multus-ds-amd64-kkz4q started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.622: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:08:19.622: INFO: simpletest.rc-flxv2 started at 2022-04-29 22:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.622: INFO: Container nginx ready: true, restart count 0 Apr 29 22:08:19.622: INFO: pod-projected-configmaps-cedef3ac-50de-4929-8c2b-8a5b5db21252 started at 2022-04-29 22:08:05 +0000 UTC (0+3 container statuses recorded) Apr 29 22:08:19.622: INFO: Container createcm-volume-test ready: false, restart count 0 Apr 29 22:08:19.622: INFO: Container delcm-volume-test ready: false, restart count 0 Apr 29 22:08:19.622: INFO: Container updcm-volume-test ready: false, restart count 0 Apr 29 22:08:19.622: INFO: node-feature-discovery-worker-kbl9s started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.622: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 22:08:19.622: INFO: agnhost-replica-6bcf79b489-lvnsf started at 2022-04-29 22:08:00 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.622: INFO: Container replica ready: true, restart count 0 Apr 29 22:08:19.622: INFO: kubernetes-dashboard-785dcbb76d-d2k5n started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.622: INFO: Container kubernetes-dashboard ready: true, restart count 1 Apr 29 22:08:19.622: INFO: externalname-service-2qv2w started at 2022-04-29 22:06:03 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.622: INFO: Container externalname-service ready: true, restart count 0 Apr 29 22:08:19.622: INFO: simpletest.rc-kqr6t started at 2022-04-29 22:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.622: INFO: Container nginx ready: true, restart count 0 Apr 29 22:08:19.622: INFO: kubernetes-metrics-scraper-5558854cb-g47c2 started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.622: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Apr 29 22:08:19.864: INFO: Latency metrics for node node1 Apr 29 22:08:19.864: INFO: Logging node info for node node2 Apr 29 22:08:19.867: INFO: Node Info: &Node{ObjectMeta:{node2 2f399869-e81b-465d-97b4-806b6186d34a 49747 0 2022-04-29 19:59:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true 
feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-29 20:08:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-29 20:12:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-29 20:12:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:12 +0000 UTC,LastTransitionTime:2022-04-29 20:03:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:08:15 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:08:15 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:08:15 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:08:15 +0000 UTC,LastTransitionTime:2022-04-29 20:03:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:22c763056cc24e6ba6e8bbadb5113d3d,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:8ca050bd-5d8a-4c59-8e02-41e26864aa92,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:cfef1b50441378a7b326a606756a12e664a435cc215d910f7aa9415cfde56361 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:145b4fe543408db530a0d8880c681aaa0e3df9b949467d93bcecf42e8625a181 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 29 22:08:19.867: INFO: Logging kubelet events for node node2 Apr 29 22:08:19.869: INFO: Logging pods the kubelet thinks is on node node2 Apr 29 22:08:19.885: INFO: kube-flannel-dbcj8 started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded) Apr 29 22:08:19.885: INFO: Init container install-cni ready: true, restart count 2 Apr 29 22:08:19.885: INFO: Container kube-flannel ready: true, restart count 3 Apr 29 22:08:19.885: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5 started at 2022-04-29 20:09:17 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.885: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 22:08:19.885: INFO: cmk-74bh9 started at 2022-04-29 20:12:25 +0000 UTC (0+2 container statuses recorded) Apr 29 22:08:19.885: INFO: Container nodereport ready: true, restart count 0 Apr 29 22:08:19.885: INFO: Container reconcile ready: true, restart count 0 Apr 29 22:08:19.885: INFO: node-exporter-tlpmt started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded) Apr 29 22:08:19.885: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:08:19.885: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:08:19.885: INFO: frontend-685fc574d5-652h7 started at 2022-04-29 22:08:00 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.885: INFO: Container guestbook-frontend 
ready: true, restart count 0 Apr 29 22:08:19.885: INFO: kube-proxy-k6tv2 started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.885: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:08:19.885: INFO: forbid-27521165-t6zxx started at 2022-04-29 22:05:00 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.885: INFO: Container c ready: true, restart count 0 Apr 29 22:08:19.885: INFO: cmk-webhook-6c9d5f8578-b9mdv started at 2022-04-29 20:12:26 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.885: INFO: Container cmk-webhook ready: true, restart count 0 Apr 29 22:08:19.885: INFO: agnhost-primary-5db8ddd565-s97m8 started at 2022-04-29 22:08:00 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.885: INFO: Container primary ready: true, restart count 0 Apr 29 22:08:19.885: INFO: simpletest.rc-hhrq6 started at 2022-04-29 22:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.885: INFO: Container nginx ready: true, restart count 0 Apr 29 22:08:19.885: INFO: simpletest.rc-q4gbm started at 2022-04-29 22:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.885: INFO: Container nginx ready: true, restart count 0 Apr 29 22:08:19.885: INFO: node-feature-discovery-worker-jtjjb started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.885: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 22:08:19.885: INFO: test-pod started at 2022-04-29 22:05:34 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.885: INFO: Container webserver ready: true, restart count 0 Apr 29 22:08:19.885: INFO: liveness-fc850a46-b0ca-4e19-b7f1-0098d0c3504c started at 2022-04-29 22:05:55 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.885: INFO: Container agnhost-container ready: false, restart count 4 Apr 29 22:08:19.885: INFO: agnhost-replica-6bcf79b489-t7gts started at 2022-04-29 22:08:00 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.885: INFO: Container replica ready: true, restart count 0 Apr 29 22:08:19.885: INFO: simpletest.rc-vntpc started at 2022-04-29 22:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.885: INFO: Container nginx ready: true, restart count 0 Apr 29 22:08:19.885: INFO: kube-multus-ds-amd64-7slcd started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.885: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:08:19.885: INFO: collectd-zxs8j started at 2022-04-29 20:17:24 +0000 UTC (0+3 container statuses recorded) Apr 29 22:08:19.885: INFO: Container collectd ready: true, restart count 0 Apr 29 22:08:19.885: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 22:08:19.885: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 22:08:19.885: INFO: externalname-service-4lmb6 started at 2022-04-29 22:06:03 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.885: INFO: Container externalname-service ready: true, restart count 0 Apr 29 22:08:19.885: INFO: cmk-init-discover-node2-csdn7 started at 2022-04-29 20:12:03 +0000 UTC (0+3 container statuses recorded) Apr 29 22:08:19.885: INFO: Container discover ready: false, restart count 0 Apr 29 22:08:19.885: INFO: Container init ready: false, restart count 0 Apr 29 22:08:19.885: INFO: Container install ready: false, restart count 0 Apr 29 22:08:19.885: INFO: frontend-685fc574d5-wbrkn started at 2022-04-29 22:08:00 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.885: INFO: 
Container guestbook-frontend ready: true, restart count 0 Apr 29 22:08:19.885: INFO: simpletest.rc-pdf4c started at 2022-04-29 22:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.885: INFO: Container nginx ready: false, restart count 0 Apr 29 22:08:19.885: INFO: nginx-proxy-node2 started at 2022-04-29 19:59:05 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.885: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 22:08:19.885: INFO: frontend-685fc574d5-2d4px started at 2022-04-29 22:07:59 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.885: INFO: Container guestbook-frontend ready: true, restart count 0 Apr 29 22:08:19.885: INFO: simpletest.rc-2jj4t started at 2022-04-29 22:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.885: INFO: Container nginx ready: true, restart count 0 Apr 29 22:08:19.885: INFO: ss2-1 started at (0+0 container statuses recorded) Apr 29 22:08:19.885: INFO: test-recreate-deployment-85d47dcb4-zrhbz started at (0+0 container statuses recorded) Apr 29 22:08:19.885: INFO: concurrent-27521167-6hc6p started at 2022-04-29 22:07:00 +0000 UTC (0+1 container statuses recorded) Apr 29 22:08:19.885: INFO: Container c ready: true, restart count 0 Apr 29 22:08:20.444: INFO: Latency metrics for node node2 Apr 29 22:08:20.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3767" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [136.938 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:08:19.246: Unexpected error: <*errors.errorString | 0xc004ba0320>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31079 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31079 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":10,"skipped":182,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} Apr 29 22:08:20.463: INFO: Running AfterSuite actions on all nodes
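The failure above means that nothing answered on node IP 10.10.190.207, NodePort 31079, for the entire two-minute retry window, even though the externalname-service endpoint pods on node1 and node2 reported ready. Outside the suite, the same reachability check can be approximated from any machine with a route to the node (a minimal sketch; the /hostname path assumes the agnhost netexec backend these service tests deploy, and the iptables check assumes kube-proxy in iptables mode):

  # Probe the NodePort directly; a healthy NodePort answers on every node IP.
  curl --connect-timeout 5 http://10.10.190.207:31079/hostname
  # On the node itself, confirm kube-proxy actually programmed rules for the port.
  iptables-save | grep 31079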
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:07:58.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating all guestbook components Apr 29 22:07:58.598: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend
Apr 29 22:07:58.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2573 create -f -' Apr 29 22:07:59.002: INFO: stderr: "" Apr 29 22:07:59.002: INFO: stdout: "service/agnhost-replica created\n" Apr 29 22:07:59.002: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend
Apr 29 22:07:59.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2573 create -f -' Apr 29 22:07:59.312: INFO: stderr: "" Apr 29 22:07:59.312: INFO: stdout: "service/agnhost-primary created\n" Apr 29 22:07:59.312: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Apr 29 22:07:59.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2573 create -f -' Apr 29 22:07:59.632: INFO: stderr: "" Apr 29 22:07:59.632: INFO: stdout: "service/frontend created\n" Apr 29 22:07:59.632: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Apr 29 22:07:59.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2573 create -f -' Apr 29 22:07:59.975: INFO: stderr: "" Apr 29 22:07:59.975: INFO: stdout: "deployment.apps/frontend created\n" Apr 29 22:07:59.976: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Apr 29 22:07:59.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2573 create -f -' Apr 29 22:08:00.312: INFO: stderr: "" Apr 29 22:08:00.312: INFO: stdout: "deployment.apps/agnhost-primary created\n" Apr 29 22:08:00.312: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Apr 29 22:08:00.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2573 create -f -' Apr 29 22:08:00.632: INFO: stderr: "" Apr 29 22:08:00.632: INFO:
stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Apr 29 22:08:00.632: INFO: Waiting for all frontend pods to be Running. Apr 29 22:08:20.685: INFO: Waiting for frontend to serve content. Apr 29 22:08:20.693: INFO: Trying to add a new entry to the guestbook. Apr 29 22:08:20.702: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Apr 29 22:08:20.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2573 delete --grace-period=0 --force -f -' Apr 29 22:08:20.850: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 29 22:08:20.850: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Apr 29 22:08:20.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2573 delete --grace-period=0 --force -f -' Apr 29 22:08:20.984: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 29 22:08:20.984: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Apr 29 22:08:20.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2573 delete --grace-period=0 --force -f -' Apr 29 22:08:21.115: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 29 22:08:21.115: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 29 22:08:21.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2573 delete --grace-period=0 --force -f -' Apr 29 22:08:21.238: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 29 22:08:21.238: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 29 22:08:21.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2573 delete --grace-period=0 --force -f -' Apr 29 22:08:21.364: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 29 22:08:21.364: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Apr 29 22:08:21.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2573 delete --grace-period=0 --force -f -' Apr 29 22:08:21.495: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 29 22:08:21.495: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:08:21.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2573" for this suite. • [SLOW TEST:22.924 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":40,"skipped":827,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} Apr 29 22:08:21.506: INFO: Running AfterSuite actions on all nodes
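The validation step above drives the agnhost-based guestbook over HTTP through the frontend service. Outside the suite, a similar smoke test can be approximated with a port-forward (a sketch; the namespace below is the one from this run, which the suite deletes afterwards, and the /guestbook query path is an assumption about how the agnhost guestbook command exposes its API):

  kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2573 port-forward service/frontend 8080:80 &
  # Add an entry, then read it back.
  curl 'http://localhost:8080/guestbook?cmd=set&key=messages&value=TestEntry'
  curl 'http://localhost:8080/guestbook?cmd=get&key=messages'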
[BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:08:05.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-d875efc2-4df9-4dd6-b05d-db7f22691d07 STEP: Creating configMap with name cm-test-opt-upd-5873c9a8-4c33-49c7-9cb9-048d9dfc7b48 STEP: Creating the pod Apr 29 22:08:05.431: INFO: The status of Pod pod-projected-configmaps-cedef3ac-50de-4929-8c2b-8a5b5db21252 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:08:07.435: INFO: The status of Pod pod-projected-configmaps-cedef3ac-50de-4929-8c2b-8a5b5db21252 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:08:09.436: INFO: The status of Pod pod-projected-configmaps-cedef3ac-50de-4929-8c2b-8a5b5db21252 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:08:11.435: INFO: The status of Pod pod-projected-configmaps-cedef3ac-50de-4929-8c2b-8a5b5db21252 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:08:13.435: INFO: The status of Pod pod-projected-configmaps-cedef3ac-50de-4929-8c2b-8a5b5db21252 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:08:15.436: INFO: The status of Pod pod-projected-configmaps-cedef3ac-50de-4929-8c2b-8a5b5db21252 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:08:17.436: INFO: The status of Pod pod-projected-configmaps-cedef3ac-50de-4929-8c2b-8a5b5db21252 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:08:19.435: INFO: The status of Pod pod-projected-configmaps-cedef3ac-50de-4929-8c2b-8a5b5db21252 is Pending, waiting for it to be Running (with Ready = true) Apr 29 22:08:21.434: INFO: The status of Pod pod-projected-configmaps-cedef3ac-50de-4929-8c2b-8a5b5db21252 is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-d875efc2-4df9-4dd6-b05d-db7f22691d07 STEP: Updating configmap cm-test-opt-upd-5873c9a8-4c33-49c7-9cb9-048d9dfc7b48 STEP: Creating configMap with name cm-test-opt-create-e0f07178-8fd9-4268-9bb1-afa4261eb2da STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:08:23.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5678" for this suite. • [SLOW TEST:18.119 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":485,"failed":0} Apr 29 22:08:23.507: INFO: Running AfterSuite actions on all nodes
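The pod in this test mounts both ConfigMaps through a single projected volume whose sources are marked optional, which is why deleting one ConfigMap and creating another is reflected in the volume without restarting the pod. A minimal sketch of such a spec (names shortened; the agnhost mounttest arguments and mount path are illustrative, not the test's exact values):

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-configmaps-demo
  spec:
    containers:
    - name: projected-configmap-volume-test
      image: k8s.gcr.io/e2e-test-images/agnhost:2.32
      args: ["mounttest", "--file_content_in_loop=/etc/projected/data-1"]
      volumeMounts:
      - name: projected-configmaps
        mountPath: /etc/projected
    volumes:
    - name: projected-configmaps
      projected:
        sources:
        # optional: true lets the pod start even if a source is absent,
        # and lets sources come and go while the pod runs.
        - configMap:
            name: cm-test-opt-del
            optional: true
        - configMap:
            name: cm-test-opt-upd
            optional: true
  EOF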
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":21,"skipped":232,"failed":0} [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:05:55.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-fc850a46-b0ca-4e19-b7f1-0098d0c3504c in namespace container-probe-6289 Apr 29 22:05:59.343: INFO: Started pod liveness-fc850a46-b0ca-4e19-b7f1-0098d0c3504c in namespace container-probe-6289 STEP: checking the pod's current state and verifying that restartCount is present Apr 29 22:05:59.347: INFO: Initial restart count of pod liveness-fc850a46-b0ca-4e19-b7f1-0098d0c3504c is 0 Apr 29 22:06:17.389: INFO: Restart count of pod container-probe-6289/liveness-fc850a46-b0ca-4e19-b7f1-0098d0c3504c is now 1 (18.042125602s elapsed) Apr 29 22:06:37.433: INFO: Restart count of pod container-probe-6289/liveness-fc850a46-b0ca-4e19-b7f1-0098d0c3504c is now 2 (38.085841667s elapsed) Apr 29 22:06:59.475: INFO: Restart count of pod container-probe-6289/liveness-fc850a46-b0ca-4e19-b7f1-0098d0c3504c is now 3 (1m0.128377045s elapsed) Apr 29 22:07:17.510: INFO: Restart count of pod container-probe-6289/liveness-fc850a46-b0ca-4e19-b7f1-0098d0c3504c is now 4 (1m18.163479348s elapsed) Apr 29 22:08:29.648: INFO: Restart count of pod container-probe-6289/liveness-fc850a46-b0ca-4e19-b7f1-0098d0c3504c is now 5 (2m30.301187325s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:08:29.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6289" for this suite. • [SLOW TEST:154.362 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":232,"failed":0} Apr 29 22:08:29.669: INFO: Running AfterSuite actions on all nodes
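The widening gaps between restarts above (18s, 38s, 1m0s, 1m18s, then 2m30s) are the kubelet's exponential crash-loop back-off; the conformance point is only that the count increases monotonically. A pod whose HTTP liveness probe starts failing reproduces the pattern (a sketch; agnhost's liveness server and the probe settings here are illustrative rather than the test's exact spec):

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-demo
  spec:
    containers:
    - name: agnhost-container
      image: k8s.gcr.io/e2e-test-images/agnhost:2.32
      args: ["liveness"]
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 15
        failureThreshold: 1
  EOF
  # Watch the restart count climb as the probe fails and back-off grows:
  kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'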
[BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:08:04.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics Apr 29 22:08:44.677: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) Apr 29 22:08:44.741: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Apr 29 22:08:44.741: INFO: Deleting pod "simpletest.rc-2jj4t" in namespace "gc-8758" Apr 29 22:08:44.773: INFO: Deleting pod "simpletest.rc-8dcdp" in namespace "gc-8758" Apr 29 22:08:44.780: INFO: Deleting pod "simpletest.rc-flxv2" in namespace "gc-8758" Apr 29 22:08:44.787: INFO: Deleting pod "simpletest.rc-hhrq6" in namespace "gc-8758" Apr 29 22:08:44.793: INFO: Deleting pod "simpletest.rc-kqr6t" in namespace "gc-8758" Apr 29 22:08:44.800: INFO: Deleting pod "simpletest.rc-pdf4c" in namespace "gc-8758" Apr 29 22:08:44.807: INFO: Deleting pod "simpletest.rc-q4gbm" in namespace "gc-8758" Apr 29 22:08:44.815: INFO: Deleting pod "simpletest.rc-rfwkt" in namespace "gc-8758" Apr 29 22:08:44.821: INFO: Deleting pod "simpletest.rc-vntpc" in namespace "gc-8758" Apr 29 22:08:44.828: INFO: Deleting pod "simpletest.rc-wtr8s" in namespace "gc-8758" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:08:44.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8758" for this suite. • [SLOW TEST:40.287 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":40,"skipped":625,"failed":0} Apr 29 22:08:44.844: INFO: Running AfterSuite actions on all nodes
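The 30-second pause above is the point of the test: the ReplicationController is deleted with an orphaning propagation policy, so the garbage collector must leave the ten simpletest.rc pods running (they were still visible in the kubelet listings for node1 and node2 earlier), and the framework then deletes them itself. The kubectl equivalent of that delete is roughly (a sketch; --cascade=orphan is the current spelling of the orphaning flag):

  kubectl --namespace=gc-8758 delete rc simpletest.rc --cascade=orphan
  # The pods survive, now with their ownerReferences cleared:
  kubectl --namespace=gc-8758 get pods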
[BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:08:05.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-292 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet Apr 29 22:08:05.544: INFO: Found 0 stateful pods, waiting for 3 Apr 29 22:08:15.547: INFO: Found 1 stateful pods, waiting for 3 Apr 29 22:08:25.548: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 29 22:08:25.548: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 29 22:08:25.548: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 29 22:08:35.548: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 29 22:08:35.548: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 29 22:08:35.548: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Apr 29 22:08:35.573: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 29 22:08:45.600: INFO: Updating stateful set ss2 Apr 29 22:08:45.605: INFO: Waiting for Pod statefulset-292/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 STEP: Restoring Pods to the correct revision when they are deleted Apr 29 22:08:55.628: INFO: Found 1 stateful pods, waiting for 3 Apr 29 22:09:05.631: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 29 22:09:05.631: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 29 22:09:05.631: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 29 22:09:05.654: INFO: Updating stateful set ss2 Apr 29 22:09:05.659: INFO: Waiting for Pod statefulset-292/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Apr 29 22:09:15.684: INFO: Updating stateful set ss2 Apr 29 22:09:15.690: INFO: Waiting for StatefulSet statefulset-292/ss2 to complete update Apr 29 22:09:15.690: INFO: Waiting for Pod statefulset-292/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Apr 29 22:09:25.696: INFO: Waiting for StatefulSet statefulset-292/ss2 to complete update [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Apr 29 22:09:35.695: INFO: Deleting all statefulset in ns statefulset-292 Apr 29 22:09:35.698: INFO: Scaling statefulset ss2 to 0 Apr 29 22:09:55.711: INFO: Waiting for statefulset status.replicas updated to 0 Apr 29 22:09:55.714: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:09:55.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-292" for this suite. • [SLOW TEST:110.217 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":38,"skipped":663,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} Apr 29 22:09:55.736: INFO: Running AfterSuite actions on all nodes
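Both the canary and the phased roll-out above are driven by one knob, the RollingUpdate partition: pods whose ordinal is greater than or equal to the partition move to the new revision, while lower ordinals keep the old one. The same sequence with kubectl looks roughly like this (statefulset, container, and image names are taken from this run; the revision hashes will differ):

  # Hold ordinals 0 and 1 back so only ss2-2 becomes the canary:
  kubectl --namespace=statefulset-292 patch statefulset ss2 -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
  kubectl --namespace=statefulset-292 set image statefulset/ss2 webserver=k8s.gcr.io/e2e-test-images/httpd:2.4.39-1
  # Phase the rest of the roll-out by lowering the partition:
  kubectl --namespace=statefulset-292 patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
  kubectl --namespace=statefulset-292 rollout status statefulset/ss2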
[BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:04:09.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0429 22:04:09.160084 35 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ForbidConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring no more jobs are scheduled STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:10:01.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-9464" for this suite. • [SLOW TEST:352.066 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":37,"skipped":822,"failed":0} Apr 29 22:10:01.191: INFO: Running AfterSuite actions on all nodes
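concurrencyPolicy: Forbid is what the nearly six-minute wait above verifies: while the long-running forbid-* job is active, each scheduled tick is skipped instead of spawning a second job. A minimal CronJob of that shape (batch/v1beta1 to match the deprecation warning logged above; the schedule, image, and sleep duration are illustrative):

  kubectl apply -f - <<EOF
  apiVersion: batch/v1beta1
  kind: CronJob
  metadata:
    name: forbid
  spec:
    schedule: "*/1 * * * *"
    # Skip new runs while a previous job is still active.
    concurrencyPolicy: Forbid
    jobTemplate:
      spec:
        template:
          spec:
            restartPolicy: OnFailure
            containers:
            - name: c
              image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
              command: ["sleep", "300"]
  EOF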
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 22:05:34.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105
STEP: Creating service test in namespace statefulset-707
[It] Should recreate evicted statefulset [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-707
STEP: Creating statefulset with conflicting port in namespace statefulset-707
STEP: Waiting until pod test-pod will start running in namespace statefulset-707
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-707
Apr 29 22:10:40.872: FAIL: Pod ss-0 expected to be re-created at least once

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000e9fe00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000e9fe00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000e9fe00, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
Apr 29 22:10:40.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-707 describe po test-pod'
Apr 29 22:10:41.070: INFO: stderr: ""
Apr 29 22:10:41.070: INFO: Output of kubectl describe test-pod:
Name:         test-pod
Namespace:    statefulset-707
Priority:     0
Node:         node2/10.10.190.208
Start Time:   Fri, 29 Apr 2022 22:05:34 +0000
Labels:       <none>
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.4.186"
                    ],
                    "mac": "06:bc:31:4a:8c:9e",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.4.186"
                    ],
                    "mac": "06:bc:31:4a:8c:9e",
                    "default": true,
                    "dns": {}
                }]
              kubernetes.io/psp: privileged
Status:       Running
IP:           10.244.4.186
IPs:
  IP:  10.244.4.186
Containers:
  webserver:
    Container ID:   docker://18c41664b0d35b99a92e560449de386f0105ddc6096e0ef26f29f5b432839178
    Image:          k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
    Image ID:       docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50
    Port:           21017/TCP
    Host Port:      21017/TCP
    State:          Running
      Started:      Fri, 29 Apr 2022 22:05:37 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wkc9k (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-wkc9k:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason   Age   From     Message
  ----    ------   ----  ----     -------
  Normal  Pulling  5m4s  kubelet  Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1"
  Normal  Pulled   5m4s  kubelet  Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 337.42843ms
  Normal  Created  5m4s  kubelet  Created container webserver
  Normal  Started  5m4s  kubelet  Started container webserver
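The STEP lines above show the test's design: it pins a bare pod (test-pod) holding hostPort 21017 to one node, then creates a StatefulSet whose pod requests the same hostPort on that node, expecting the port conflict to force ss-0 into a create-and-delete cycle that the test observes. A minimal sketch of such a conflicting spec, with assumed names (this is not the test's source):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// conflictingPodSpec pins a pod to `node` and claims hostPort 21017, so any
// second pod requesting the same hostPort cannot run on that node.
func conflictingPodSpec(node string) corev1.PodSpec {
	return corev1.PodSpec{
		NodeName: node, // bypass the scheduler; exactly one node is in play
		Containers: []corev1.Container{{
			Name:  "webserver",
			Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
			Ports: []corev1.ContainerPort{{
				ContainerPort: 21017,
				HostPort:      21017, // the port the admission events below reject
			}},
		}},
	}
}

func main() {
	spec := conflictingPodSpec("node2")
	fmt.Println("hostPort:", spec.Containers[0].Ports[0].HostPort)
}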
Apr 29 22:10:41.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-707 logs test-pod --tail=100'
Apr 29 22:10:41.223: INFO: stderr: ""
Apr 29 22:10:41.223: INFO: Last 100 log lines of test-pod:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.4.186. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.4.186. Set the 'ServerName' directive globally to suppress this message
[Fri Apr 29 22:05:37.891386 2022] [mpm_event:notice] [pid 1:tid 140497210944360] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations
[Fri Apr 29 22:05:37.891430 2022] [core:notice] [pid 1:tid 140497210944360] AH00094: Command line: 'httpd -D FOREGROUND'
Apr 29 22:10:41.223: INFO: Deleting all statefulset in ns statefulset-707
Apr 29 22:10:41.226: INFO: Scaling statefulset ss to 0
Apr 29 22:10:41.235: INFO: Waiting for statefulset status.replicas updated to 0
Apr 29 22:10:51.245: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "statefulset-707".
STEP: Found 7 events.
Apr 29 22:10:51.256: INFO: At 2022-04-29 22:05:34 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: []]
Apr 29 22:10:51.256: INFO: At 2022-04-29 22:05:34 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104]]
Apr 29 22:10:51.256: INFO: At 2022-04-29 22:05:34 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100]]
Apr 29 22:10:51.256: INFO: At 2022-04-29 22:05:37 +0000 UTC - event for test-pod: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1"
Apr 29 22:10:51.256: INFO: At 2022-04-29 22:05:37 +0000 UTC - event for test-pod: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 337.42843ms
Apr 29 22:10:51.257: INFO: At 2022-04-29 22:05:37 +0000 UTC - event for test-pod: {kubelet node2} Created: Created container webserver
Apr 29 22:10:51.257: INFO: At 2022-04-29 22:05:37 +0000 UTC - event for test-pod: {kubelet node2} Started: Started container webserver
Apr 29 22:10:51.259: INFO: POD       NODE   PHASE    GRACE  CONDITIONS
Apr 29 22:10:51.259: INFO: test-pod  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:05:34 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:05:38 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:05:38 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 22:05:34 +0000 UTC  }]
Apr 29 22:10:51.259: INFO:
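The three FailedCreate variants above are one and the same rejection, with the cluster's PodSecurityPolicies evaluated in different orders: policies allowing host ports [9103-9104], [9100], and [] (none) each deny 21017, so the statefulset controller can never create ss-0 at all, and the create-and-delete cycle the test waits for never starts. A hedged sketch of a PSP with such a narrow hostPort range follows; the name is illustrative, since the actual cluster policies are not shown in this log:

package main

import (
	"fmt"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	psp := policyv1beta1.PodSecurityPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "narrow-hostports"},
		Spec: policyv1beta1.PodSecurityPolicySpec{
			// Pods admitted under this policy may bind only host ports
			// 9103-9104; a pod asking for hostPort 21017 is denied.
			HostPorts: []policyv1beta1.HostPortRange{{Min: 9103, Max: 9104}},
			// The remaining strategy fields are required by the API; leave
			// them unconstrained for the sketch.
			SELinux:            policyv1beta1.SELinuxStrategyOptions{Rule: policyv1beta1.SELinuxStrategyRunAsAny},
			RunAsUser:          policyv1beta1.RunAsUserStrategyOptions{Rule: policyv1beta1.RunAsUserStrategyRunAsAny},
			SupplementalGroups: policyv1beta1.SupplementalGroupsStrategyOptions{Rule: policyv1beta1.SupplementalGroupsStrategyRunAsAny},
			FSGroup:            policyv1beta1.FSGroupStrategyOptions{Rule: policyv1beta1.FSGroupStrategyRunAsAny},
		},
	}
	fmt.Printf("allows host ports %d-%d\n", psp.Spec.HostPorts[0].Min, psp.Spec.HostPorts[0].Max)
}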
Apr 29 22:10:51.263: INFO: Logging node info for node master1
Apr 29 22:10:51.265: INFO: Node Info: &Node{ObjectMeta:{master1 c968c2e7-7594-4f6e-b85d-932008e8124f 50872 0 2022-04-29 19:57:18 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-29 19:57:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:05:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-04-29 20:08:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:15 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:10:46 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:10:46 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:10:46 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:10:46 +0000 UTC,LastTransitionTime:2022-04-29 20:00:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c3419fad4d2d4c5c9574e5b11ef92b4b,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:5e0f934f-c777-4827-ade6-efec15a825ef,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 
localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f09acec459e39fddbd00d2ff6975dd7715ddae0b47f70ed62d6f52e6be7e3f22 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 29 22:10:51.266: INFO: Logging kubelet events for node master1 Apr 29 22:10:51.268: INFO: Logging pods the kubelet thinks is on node master1 Apr 29 22:10:51.294: INFO: kube-proxy-9s46x started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:10:51.295: INFO: Container kube-proxy ready: true, restart count 1 Apr 29 22:10:51.295: INFO: kube-flannel-cskzh started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded) Apr 29 22:10:51.295: INFO: Init container install-cni ready: true, restart count 0 Apr 29 22:10:51.295: INFO: Container kube-flannel ready: true, restart count 1 Apr 29 22:10:51.295: INFO: kube-multus-ds-amd64-w54d6 started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded) Apr 29 22:10:51.295: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:10:51.295: INFO: node-feature-discovery-controller-cff799f9f-zpv5m started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded) Apr 29 22:10:51.295: INFO: Container nfd-controller ready: true, restart count 0 Apr 29 22:10:51.295: INFO: node-exporter-svkqv started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded) Apr 29 22:10:51.295: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:10:51.295: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:10:51.295: INFO: kube-apiserver-master1 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded) Apr 29 22:10:51.295: INFO: Container kube-apiserver ready: true, restart count 0 Apr 29 22:10:51.295: INFO: kube-controller-manager-master1 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded) Apr 29 22:10:51.295: INFO: Container kube-controller-manager ready: true, restart count 2 Apr 29 22:10:51.295: INFO: kube-scheduler-master1 started at 2022-04-29 20:16:35 +0000 UTC (0+1 container statuses recorded) Apr 29 22:10:51.295: INFO: Container kube-scheduler ready: true, restart count 1 Apr 29 22:10:51.295: INFO: coredns-8474476ff8-59qm6 started at 2022-04-29 20:00:39 +0000 UTC (0+1 container statuses recorded) Apr 29 22:10:51.295: INFO: Container coredns ready: true, restart count 1 Apr 29 22:10:51.295: INFO: container-registry-65d7c44b96-np5nk started at 2022-04-29 20:04:54 +0000 UTC (0+2 container statuses recorded) Apr 29 22:10:51.295: INFO: Container docker-registry ready: true, restart count 0 Apr 29 22:10:51.295: INFO: Container nginx ready: true, restart count 0 Apr 29 22:10:51.391: INFO: Latency metrics for node master1 Apr 29 22:10:51.391: INFO: Logging node info for node master2 Apr 29 22:10:51.394: INFO: Node Info: &Node{ObjectMeta:{master2 5b362581-f2d5-419c-a0b0-3aad7bec82f9 50876 0 2022-04-29 19:57:49 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: 
node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-29 19:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:10:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:15 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:10:48 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:10:48 +0000 
UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:10:48 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:10:48 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d055250c7e194b8a9a572c232266a800,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:fb9f32a4-f021-45dd-bddf-6f1d5ae9abae,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 29 22:10:51.394: INFO: Logging kubelet events for node master2 Apr 29 22:10:51.396: INFO: Logging pods the kubelet thinks is on node master2 Apr 29 22:10:51.410: INFO: kube-proxy-4dnjw started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:10:51.410: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:10:51.410: INFO: kube-flannel-q2wgv started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded) Apr 29 22:10:51.410: INFO: Init container install-cni ready: true, restart count 0 Apr 29 22:10:51.410: INFO: Container kube-flannel ready: true, restart count 1 Apr 29 22:10:51.410: INFO: kube-multus-ds-amd64-txslv started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded) Apr 29 22:10:51.410: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:10:51.410: INFO: kube-apiserver-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded) Apr 29 22:10:51.410: INFO: Container kube-apiserver ready: true, restart count 0 Apr 29 22:10:51.410: INFO: kube-scheduler-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded) Apr 29 22:10:51.410: INFO: Container kube-scheduler ready: true, restart count 3 Apr 29 22:10:51.410: INFO: dns-autoscaler-7df78bfcfb-csfp5 started at 2022-04-29 20:00:43 +0000 UTC (0+1 container statuses recorded) Apr 29 22:10:51.410: INFO: Container autoscaler ready: true, restart count 1 Apr 29 22:10:51.410: INFO: coredns-8474476ff8-bg2wr started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:10:51.410: INFO: Container coredns ready: true, restart count 2 Apr 29 22:10:51.410: INFO: prometheus-operator-585ccfb458-q8r6q started at 2022-04-29 20:13:20 +0000 UTC (0+2 container statuses recorded) Apr 29 22:10:51.410: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:10:51.410: INFO: Container prometheus-operator ready: true, restart count 0 Apr 29 22:10:51.410: INFO: node-exporter-9rgc2 started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded) Apr 29 22:10:51.410: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:10:51.410: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:10:51.410: INFO: kube-controller-manager-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded) Apr 29 22:10:51.410: INFO: Container kube-controller-manager ready: true, restart count 1 Apr 29 22:10:51.502: INFO: Latency metrics for node 
master2 Apr 29 22:10:51.502: INFO: Logging node info for node master3 Apr 29 22:10:51.505: INFO: Node Info: &Node{ObjectMeta:{master3 1096e515-b559-4c90-b0f7-3398537b5f9e 50877 0 2022-04-29 19:58:00 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-04-29 19:58:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:10:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:16 +0000 UTC,LastTransitionTime:2022-04-29 20:03:16 +0000 
UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:10:49 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:10:49 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:10:49 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:10:49 +0000 UTC,LastTransitionTime:2022-04-29 20:00:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8955b376e6314525a9e533e277f5f4fb,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:6ffefaf4-8a5c-4288-a6a9-78ef35aa67ef,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 29 22:10:51.505: INFO: Logging kubelet events for node master3 Apr 29 22:10:51.507: INFO: Logging pods the kubelet thinks is on node master3 Apr 29 22:10:51.521: INFO: kube-apiserver-master3 started at 2022-04-29 19:58:29 +0000 UTC (0+1 container statuses recorded) Apr 29 22:10:51.521: INFO: Container kube-apiserver ready: true, restart count 0 Apr 29 22:10:51.521: INFO: kube-controller-manager-master3 started at 2022-04-29 20:06:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:10:51.521: INFO: Container kube-controller-manager ready: true, restart count 3 Apr 29 22:10:51.521: INFO: kube-scheduler-master3 started at 2022-04-29 20:06:45 +0000 UTC (0+1 container statuses recorded) Apr 29 22:10:51.521: INFO: Container kube-scheduler ready: true, restart count 2 Apr 29 22:10:51.521: INFO: kube-proxy-gs7qh started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Apr 29 22:10:51.521: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:10:51.521: INFO: kube-flannel-g8w9b started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded) Apr 29 22:10:51.521: INFO: Init container install-cni ready: true, restart count 0 Apr 29 22:10:51.521: INFO: Container kube-flannel ready: true, restart count 2 Apr 29 22:10:51.521: INFO: kube-multus-ds-amd64-lxrlj started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded) Apr 29 22:10:51.521: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:10:51.521: INFO: node-exporter-gdq6v started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded) Apr 29 22:10:51.521: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:10:51.521: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:10:51.602: INFO: Latency metrics for node master3 Apr 29 22:10:51.602: INFO: Logging node info for node node1 Apr 29 22:10:51.605: INFO: Node Info: &Node{ObjectMeta:{node1 6842a10e-614a-46f0-b405-bc18936b0017 50861 0 2022-04-29 19:59:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true 
feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-29 20:08:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-29 20:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-29 20:11:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:02:57 +0000 UTC,LastTransitionTime:2022-04-29 20:02:57 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:10:44 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:10:44 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:10:44 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:10:44 +0000 UTC,LastTransitionTime:2022-04-29 20:00:14 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2a0958eb1b3044f2963c9e5f2e902173,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:fc6a2d14-7726-4aec-9428-6617632ddcbe,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:cfef1b50441378a7b326a606756a12e664a435cc215d910f7aa9415cfde56361 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:145b4fe543408db530a0d8880c681aaa0e3df9b949467d93bcecf42e8625a181 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f09acec459e39fddbd00d2ff6975dd7715ddae0b47f70ed62d6f52e6be7e3f22 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 29 22:10:51.606: INFO: Logging kubelet events for node node1
Apr 29 22:10:51.609: INFO: Logging pods the kubelet thinks are on node node1
Apr 29 22:10:51.628: INFO: nginx-proxy-node1 started at 2022-04-29 19:59:05 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:10:51.628: INFO: Container nginx-proxy ready: true, restart count 2
Apr 29 22:10:51.628: INFO: cmk-f5znp started at 2022-04-29 20:12:25 +0000 UTC (0+2 container statuses recorded)
Apr 29 22:10:51.628: INFO: Container nodereport ready: true, restart count 0
Apr 29 22:10:51.628: INFO: Container reconcile ready: true, restart count 0
Apr 29 22:10:51.628: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq started at 2022-04-29 20:09:17 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:10:51.628: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 29 22:10:51.628: INFO: cmk-init-discover-node1-gxlbt started at 2022-04-29 20:11:43 +0000 UTC (0+3 container statuses recorded)
Apr 29 22:10:51.628: INFO: Container discover ready: false, restart count 0
Apr 29 22:10:51.628: INFO: Container init ready: false, restart count 0
Apr 29 22:10:51.628: INFO: Container install ready: false, restart count 0
Apr 29 22:10:51.628: INFO: prometheus-k8s-0 started at 2022-04-29 20:13:38 +0000 UTC (0+4 container statuses recorded)
Apr 29 22:10:51.628: INFO: Container config-reloader ready: true, restart count 0
Apr 29 22:10:51.628: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 29 22:10:51.628: INFO: Container grafana ready: true, restart count 0
Apr 29 22:10:51.628: INFO: Container prometheus ready: true, restart count 1
Apr 29 22:10:51.628: INFO: tas-telemetry-aware-scheduling-84ff454dfb-khdw5 started at 2022-04-29 20:16:34 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:10:51.628: INFO: Container tas-extender ready: true, restart count 0
Apr 29 22:10:51.628: INFO: kube-proxy-v9tgj started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:10:51.629: INFO: Container kube-proxy ready: true, restart count 2
Apr 29 22:10:51.629: INFO: node-exporter-c8777 started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded)
Apr 29 22:10:51.629: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 29 22:10:51.629: INFO: Container node-exporter ready: true, restart count 0
Apr 29 22:10:51.629: INFO: kube-flannel-47phs started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded)
Apr 29 22:10:51.629: INFO: Init container install-cni ready: true, restart count 2
Apr 29 22:10:51.629: INFO: Container kube-flannel ready: true, restart count 2
Apr 29 22:10:51.629: INFO: collectd-ccgw2 started at 2022-04-29 20:17:24 +0000 UTC (0+3 container statuses recorded)
Apr 29 22:10:51.629: INFO: Container collectd ready: true, restart count 0
Apr 29 22:10:51.629: INFO: Container collectd-exporter ready: true, restart count 0
Apr 29 22:10:51.629: INFO: Container rbac-proxy ready: true, restart count 0
Apr 29 22:10:51.629: INFO: kube-multus-ds-amd64-kkz4q started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:10:51.629: INFO: Container kube-multus ready: true, restart count 1
Apr 29 22:10:51.629: INFO: node-feature-discovery-worker-kbl9s started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:10:51.629: INFO: Container nfd-worker ready: true, restart count 0
Apr 29 22:10:51.629: INFO: kubernetes-dashboard-785dcbb76d-d2k5n started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:10:51.629: INFO: Container kubernetes-dashboard ready: true, restart count 1
Apr 29 22:10:51.629: INFO: kubernetes-metrics-scraper-5558854cb-g47c2 started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:10:51.629: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 29 22:10:51.782: INFO: Latency metrics for node node1
Apr 29 22:10:51.782: INFO: Logging node info for node node2
Apr 29 22:10:51.785: INFO: Node Info: &Node{ObjectMeta:{node2 2f399869-e81b-465d-97b4-806b6186d34a 50873 0 2022-04-29 19:59:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources:
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-29 20:08:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-29 20:12:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-29 20:12:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:12 +0000 UTC,LastTransitionTime:2022-04-29 20:03:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 22:10:47 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 22:10:47 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 22:10:47 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 22:10:47 +0000 UTC,LastTransitionTime:2022-04-29 20:03:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:22c763056cc24e6ba6e8bbadb5113d3d,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:8ca050bd-5d8a-4c59-8e02-41e26864aa92,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:cfef1b50441378a7b326a606756a12e664a435cc215d910f7aa9415cfde56361 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:145b4fe543408db530a0d8880c681aaa0e3df9b949467d93bcecf42e8625a181 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 29 22:10:51.787: INFO: Logging kubelet events for node node2
Apr 29 22:10:51.789: INFO: Logging pods the kubelet thinks are on node node2
Apr 29 22:10:51.799: INFO: kube-proxy-k6tv2 started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:10:51.799: INFO: Container kube-proxy ready: true, restart count 2
Apr 29 22:10:51.799: INFO: kube-flannel-dbcj8 started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded)
Apr 29 22:10:51.799: INFO: Init container install-cni ready: true, restart count 2
Apr 29 22:10:51.799: INFO: Container kube-flannel ready: true, restart count 3
Apr 29 22:10:51.799: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5 started at 2022-04-29 20:09:17 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:10:51.799: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 29 22:10:51.799: INFO: cmk-74bh9 started at 2022-04-29 20:12:25 +0000 UTC (0+2 container statuses recorded)
Apr 29 22:10:51.799: INFO: Container nodereport ready: true, restart count 0
Apr 29 22:10:51.799: INFO: Container reconcile ready: true, restart count 0
Apr 29 22:10:51.799: INFO: node-exporter-tlpmt started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded)
Apr 29 22:10:51.799: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 29 22:10:51.799: INFO: Container node-exporter ready: true, restart count 0
Apr 29 22:10:51.799: INFO: cmk-webhook-6c9d5f8578-b9mdv started at 2022-04-29 20:12:26 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:10:51.799: INFO: Container cmk-webhook ready: true, restart count 0
Apr 29 22:10:51.799: INFO: node-feature-discovery-worker-jtjjb started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:10:51.799: INFO: Container nfd-worker ready: true, restart count 0
Apr 29 22:10:51.799: INFO: kube-multus-ds-amd64-7slcd started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:10:51.799: INFO: Container kube-multus ready: true, restart count 1
Apr 29 22:10:51.799: INFO: test-pod started at 2022-04-29 22:05:34 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:10:51.799: INFO: Container webserver ready: true, restart count 0
Apr 29 22:10:51.799: INFO: cmk-init-discover-node2-csdn7 started at 2022-04-29 20:12:03 +0000 UTC (0+3 container statuses recorded)
Apr 29 22:10:51.799: INFO: Container discover ready: false, restart count 0
Apr 29 22:10:51.799: INFO: Container init ready: false, restart count 0
Apr 29 22:10:51.799: INFO: Container install ready: false, restart count 0
Apr 29 22:10:51.799: INFO: collectd-zxs8j started at 2022-04-29 20:17:24 +0000 UTC (0+3 container statuses recorded)
Apr 29 22:10:51.799: INFO: Container collectd ready: true, restart count 0
Apr 29 22:10:51.799: INFO: Container collectd-exporter ready: true, restart count 0
Apr 29 22:10:51.799: INFO: Container rbac-proxy ready: true, restart count 0
Apr 29 22:10:51.799: INFO: nginx-proxy-node2 started at 2022-04-29 19:59:05 +0000 UTC (0+1 container statuses recorded)
Apr 29 22:10:51.799: INFO: Container nginx-proxy ready: true, restart count 2
Apr 29 22:10:51.928: INFO: Latency metrics for node node2
Apr 29 22:10:51.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-707" for this suite.
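Note: the node-info and pod dumps above are the e2e framework's standard diagnostics for a failed spec. Roughly the same view can be reproduced by hand against the live cluster; this is a sketch using only stock kubectl, with the node name and test namespace taken from this log (the events query only returns results before the namespace teardown above completes):

    kubectl describe node node2                                        # conditions, capacity, cached images
    kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=node2   # pods scheduled to node2
    kubectl get events -n statefulset-707 --sort-by=.lastTimestamp     # eviction/recreation events for ss-0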
• Failure [317.117 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    Should recreate evicted statefulset [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Apr 29 22:10:40.873: Pod ss-0 expected to be re-created at least once

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":13,"skipped":222,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
Apr 29 22:10:51.942: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":25,"skipped":334,"failed":0}
Apr 29 22:08:18.885: INFO: Running AfterSuite actions on all nodes
Apr 29 22:10:52.012: INFO: Running AfterSuite actions on node 1
Apr 29 22:10:52.012: INFO: Skipping dumping logs from cluster

Summarizing 6 Failures:

[Fail] [sig-network] Services [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576

[Fail] [sig-network] Services [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576

[Fail] [sig-network] Services [It] should be able to create a functioning NodePort service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169

[Fail] [sig-network] Services [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497

[Fail] [sig-network] Services [It] should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351

[Fail] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

Ran 320 of 5773 Specs in 789.945 seconds
FAIL! -- 314 Passed | 6 Failed | 0 Pending | 5453 Skipped

Ginkgo ran 1 suite in 13m11.573148662s
Test Suite Failed
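Note: five of the six failures are NodePort Service specs tripping assertions in service.go, and the sixth is the StatefulSet eviction spec, so a shared cluster-level cause (kube-proxy or CNI reachability on the NodePort path) is worth ruling out before treating them as independent test bugs. To iterate without repeating the full 13-minute run, the failing specs can be re-run alone with a Ginkgo focus. A minimal sketch, assuming the standard e2e.test binary built from this tree and the kubeconfig this run used; the focus regex is illustrative and will also match NodePort specs that passed:

    ./e2e.test --kubeconfig=/root/.kube/config --provider=skeleton \
        --ginkgo.focus='NodePort|Should recreate evicted statefulset'

--ginkgo.focus takes an unanchored regular expression matched against the full spec name, so the pattern can be tightened to select exactly the six specs listed in the summary.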