Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1635555156 - Will randomize all specs
Will run 5770 specs

Running in parallel across 10 nodes

Oct 30 00:52:38.522: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 00:52:38.527: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 30 00:52:38.557: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 30 00:52:38.621: INFO: The status of Pod cmk-init-discover-node1-n4mcc is Succeeded, skipping waiting
Oct 30 00:52:38.621: INFO: The status of Pod cmk-init-discover-node2-2fmmt is Succeeded, skipping waiting
Oct 30 00:52:38.621: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 30 00:52:38.621: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Oct 30 00:52:38.621: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 30 00:52:38.633: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Oct 30 00:52:38.633: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Oct 30 00:52:38.633: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Oct 30 00:52:38.633: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Oct 30 00:52:38.633: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Oct 30 00:52:38.633: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Oct 30 00:52:38.633: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Oct 30 00:52:38.633: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 30 00:52:38.633: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Oct 30 00:52:38.633: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Oct 30 00:52:38.633: INFO: e2e test version: v1.21.5
Oct 30 00:52:38.634: INFO: kube-apiserver version: v1.21.1
Oct 30 00:52:38.634: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 00:52:38.640: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
Oct 30 00:52:38.639: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 00:52:38.661: INFO: Cluster IP family: ipv4
SSSS
------------------------------
Oct 30 00:52:38.646: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 00:52:38.667: INFO: Cluster IP family: ipv4
S
------------------------------
Oct 30 00:52:38.645: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 00:52:38.667: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Oct 30 00:52:38.663: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 00:52:38.687: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
Oct 30 00:52:38.669: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 00:52:38.691: INFO: Cluster IP family: ipv4
Oct 30 00:52:38.669: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 00:52:38.692: INFO: Cluster IP family: ipv4
SS
------------------------------
Oct 30 00:52:38.671: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 00:52:38.693: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
Oct 30 00:52:38.675: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 00:52:38.697: INFO: Cluster IP family: ipv4
SSSSSSSSSSSS
------------------------------
Oct 30 00:52:38.675: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 00:52:38.704: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:52:38.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
W1030 00:52:38.711333 36 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 00:52:38.711: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 00:52:38.715: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:52:38.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9985" for this suite.
•SSS
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
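For context, the spec above only checks that registered CRD groups show up in the API discovery documents. A minimal sketch of a CustomResourceDefinition whose group would then appear under /apis; all names here (widgets.example.com) are illustrative, not taken from the run:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com        # hypothetical CRD, <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
# The discovery documents the test fetches can be inspected directly, e.g.:
# kubectl get --raw /apis
# kubectl get --raw /apis/apiextensions.k8s.io/v1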
[BeforeEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:52:38.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingress
W1030 00:52:38.735759 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 00:52:38.735: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 00:52:38.738: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating Ingress API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Oct 30 00:52:38.771: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Oct 30 00:52:38.774: INFO: starting watch
STEP: patching
STEP: updating
Oct 30 00:52:38.784: INFO: waiting for watch events with expected annotations
Oct 30 00:52:38.784: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:52:38.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-886" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":1,"skipped":16,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
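The Ingress spec above exercises the full create/get/list/watch/patch/update/delete cycle against networking.k8s.io/v1. A minimal sketch of the kind of object it round-trips; the names (example-ingress, example-svc, example.com) are hypothetical:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress            # hypothetical
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-svc      # hypothetical backend Service
            port:
              number: 80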
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:52:38.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
W1030 00:52:38.720905 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 00:52:38.721: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 00:52:38.723: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override command
Oct 30 00:52:38.738: INFO: Waiting up to 5m0s for pod "client-containers-c618631c-4518-47a0-be46-ebdfa7f6cd22" in namespace "containers-1152" to be "Succeeded or Failed"
Oct 30 00:52:38.744: INFO: Pod "client-containers-c618631c-4518-47a0-be46-ebdfa7f6cd22": Phase="Pending", Reason="", readiness=false. Elapsed: 5.691475ms
Oct 30 00:52:40.750: INFO: Pod "client-containers-c618631c-4518-47a0-be46-ebdfa7f6cd22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011331734s
Oct 30 00:52:42.756: INFO: Pod "client-containers-c618631c-4518-47a0-be46-ebdfa7f6cd22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017552502s
Oct 30 00:52:44.761: INFO: Pod "client-containers-c618631c-4518-47a0-be46-ebdfa7f6cd22": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023023269s
Oct 30 00:52:46.765: INFO: Pod "client-containers-c618631c-4518-47a0-be46-ebdfa7f6cd22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.02663604s
STEP: Saw pod success
Oct 30 00:52:46.765: INFO: Pod "client-containers-c618631c-4518-47a0-be46-ebdfa7f6cd22" satisfied condition "Succeeded or Failed"
Oct 30 00:52:46.768: INFO: Trying to get logs from node node2 pod client-containers-c618631c-4518-47a0-be46-ebdfa7f6cd22 container agnhost-container:
STEP: delete the pod
Oct 30 00:52:46.788: INFO: Waiting for pod client-containers-c618631c-4518-47a0-be46-ebdfa7f6cd22 to disappear
Oct 30 00:52:46.790: INFO: Pod client-containers-c618631c-4518-47a0-be46-ebdfa7f6cd22 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:52:46.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1152" for this suite.

• [SLOW TEST:8.097 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":25,"failed":0}
SSSSSSSSSSSS
------------------------------
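The spec above overrides the image's built-in entrypoint via the pod's command field. A minimal sketch of the same idea; pod name, image, and command are illustrative, not the generated ones from the run:

apiVersion: v1
kind: Pod
metadata:
  name: override-command-demo      # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox:1.29            # any image with a default entrypoint
    # "command" replaces the image's ENTRYPOINT; "args" would replace CMD
    command: ["echo", "entrypoint overridden"]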
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:52:38.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-aac14a86-3e2f-45f7-a0b6-c8001b3440af
STEP: Creating a pod to test consume configMaps
Oct 30 00:52:39.169: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-05eab7a9-1fc9-40b1-9057-1439b1894904" in namespace "projected-8523" to be "Succeeded or Failed"
Oct 30 00:52:39.172: INFO: Pod "pod-projected-configmaps-05eab7a9-1fc9-40b1-9057-1439b1894904": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21782ms
Oct 30 00:52:41.176: INFO: Pod "pod-projected-configmaps-05eab7a9-1fc9-40b1-9057-1439b1894904": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006276844s
Oct 30 00:52:43.183: INFO: Pod "pod-projected-configmaps-05eab7a9-1fc9-40b1-9057-1439b1894904": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013977611s
Oct 30 00:52:45.188: INFO: Pod "pod-projected-configmaps-05eab7a9-1fc9-40b1-9057-1439b1894904": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018168704s
Oct 30 00:52:47.191: INFO: Pod "pod-projected-configmaps-05eab7a9-1fc9-40b1-9057-1439b1894904": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.021719698s
STEP: Saw pod success
Oct 30 00:52:47.191: INFO: Pod "pod-projected-configmaps-05eab7a9-1fc9-40b1-9057-1439b1894904" satisfied condition "Succeeded or Failed"
Oct 30 00:52:47.193: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-05eab7a9-1fc9-40b1-9057-1439b1894904 container agnhost-container:
STEP: delete the pod
Oct 30 00:52:47.218: INFO: Waiting for pod pod-projected-configmaps-05eab7a9-1fc9-40b1-9057-1439b1894904 to disappear
Oct 30 00:52:47.220: INFO: Pod pod-projected-configmaps-05eab7a9-1fc9-40b1-9057-1439b1894904 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:52:47.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8523" for this suite.

• [SLOW TEST:8.376 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":35,"failed":0}
SSSSSSSSSSSS
------------------------------
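This spec mounts a ConfigMap through a projected volume and reads a key back from the container. A minimal sketch with hypothetical names (demo-config, projected-demo):

apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config                # hypothetical
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo             # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    command: ["cat", "/etc/projected/data-1"]   # prints "value-1"
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:                     # projected volumes can merge several sources
      sources:
      - configMap:
          name: demo-config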
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:52:38.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
W1030 00:52:38.734301 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 00:52:38.734: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 00:52:38.736: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Oct 30 00:52:38.740: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:52:47.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2615" for this suite.

• [SLOW TEST:8.615 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:52:38.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
W1030 00:52:38.766357 25 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 00:52:38.766: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 00:52:38.768: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating pod
Oct 30 00:52:38.788: INFO: The status of Pod pod-hostip-80d67ff8-3f54-49a8-a897-59f177887206 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 00:52:40.793: INFO: The status of Pod pod-hostip-80d67ff8-3f54-49a8-a897-59f177887206 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 00:52:42.793: INFO: The status of Pod pod-hostip-80d67ff8-3f54-49a8-a897-59f177887206 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 00:52:44.795: INFO: The status of Pod pod-hostip-80d67ff8-3f54-49a8-a897-59f177887206 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 00:52:46.793: INFO: The status of Pod pod-hostip-80d67ff8-3f54-49a8-a897-59f177887206 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 00:52:48.792: INFO: The status of Pod pod-hostip-80d67ff8-3f54-49a8-a897-59f177887206 is Running (Ready = true)
Oct 30 00:52:48.797: INFO: Pod pod-hostip-80d67ff8-3f54-49a8-a897-59f177887206 has hostIP: 10.10.190.208
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:52:48.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1456" for this suite.

• [SLOW TEST:10.070 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":14,"failed":0}
SSSSSSSS
------------------------------
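The host IP checked above is populated by the kubelet in status.hostIP once the pod is scheduled and running. A minimal sketch of a pod to observe this on; the name and image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: hostip-demo                # hypothetical
spec:
  containers:
  - name: sleeper
    image: busybox:1.29
    command: ["sleep", "3600"]
# Once Running, read the node IP the pod landed on:
# kubectl get pod hostip-demo -o jsonpath='{.status.hostIP}'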
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:52:47.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 30 00:52:47.375: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f1be371f-2b9b-4b3d-ad1d-d835ea76f0e8" in namespace "downward-api-3572" to be "Succeeded or Failed"
Oct 30 00:52:47.378: INFO: Pod "downwardapi-volume-f1be371f-2b9b-4b3d-ad1d-d835ea76f0e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.585986ms
Oct 30 00:52:49.380: INFO: Pod "downwardapi-volume-f1be371f-2b9b-4b3d-ad1d-d835ea76f0e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005207252s
Oct 30 00:52:51.385: INFO: Pod "downwardapi-volume-f1be371f-2b9b-4b3d-ad1d-d835ea76f0e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009621907s
STEP: Saw pod success
Oct 30 00:52:51.385: INFO: Pod "downwardapi-volume-f1be371f-2b9b-4b3d-ad1d-d835ea76f0e8" satisfied condition "Succeeded or Failed"
Oct 30 00:52:51.388: INFO: Trying to get logs from node node1 pod downwardapi-volume-f1be371f-2b9b-4b3d-ad1d-d835ea76f0e8 container client-container:
STEP: delete the pod
Oct 30 00:52:51.402: INFO: Waiting for pod downwardapi-volume-f1be371f-2b9b-4b3d-ad1d-d835ea76f0e8 to disappear
Oct 30 00:52:51.404: INFO: Pod downwardapi-volume-f1be371f-2b9b-4b3d-ad1d-d835ea76f0e8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:52:51.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3572" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0}
SSSSSSSSSS
------------------------------
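The downward API volume in the spec above exposes the container's own CPU request as a file. A minimal sketch, assuming a hypothetical pod name and a 250m request; resourceFieldRef requires containerName:

apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo          # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m              # file contains "250" (milli-CPUs)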
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:52:38.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
W1030 00:52:38.725235 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 00:52:38.725: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 00:52:38.727: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 30 00:52:38.752: INFO: The status of Pod busybox-readonly-fsfc383066-36b2-4e0c-b990-55527d395781 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 00:52:40.756: INFO: The status of Pod busybox-readonly-fsfc383066-36b2-4e0c-b990-55527d395781 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 00:52:42.757: INFO: The status of Pod busybox-readonly-fsfc383066-36b2-4e0c-b990-55527d395781 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 00:52:44.756: INFO: The status of Pod busybox-readonly-fsfc383066-36b2-4e0c-b990-55527d395781 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 00:52:46.757: INFO: The status of Pod busybox-readonly-fsfc383066-36b2-4e0c-b990-55527d395781 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 00:52:48.756: INFO: The status of Pod busybox-readonly-fsfc383066-36b2-4e0c-b990-55527d395781 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 00:52:50.756: INFO: The status of Pod busybox-readonly-fsfc383066-36b2-4e0c-b990-55527d395781 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 00:52:52.755: INFO: The status of Pod busybox-readonly-fsfc383066-36b2-4e0c-b990-55527d395781 is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:52:52.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2998" for this suite.

• [SLOW TEST:14.065 seconds]
[sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a read only busybox container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:188
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}
SSSSSSSS
------------------------------
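The read-only behavior exercised above comes from the container-level security context. A minimal sketch with hypothetical names; any write to the root filesystem inside such a container fails with "Read-only file system":

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-demo      # hypothetical
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sleep", "3600"]
    securityContext:
      readOnlyRootFilesystem: true # only explicitly mounted volumes stay writable
# e.g. kubectl exec busybox-readonly-demo -- sh -c 'echo x > /file' is expected to fail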
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:52:38.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
W1030 00:52:38.742585 33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 00:52:38.742: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 00:52:38.745: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 30 00:52:38.766: INFO: Waiting up to 5m0s for pod "downwardapi-volume-200bdb31-5e1d-40fb-96f7-aa7bb0c68954" in namespace "projected-5185" to be "Succeeded or Failed"
Oct 30 00:52:38.768: INFO: Pod "downwardapi-volume-200bdb31-5e1d-40fb-96f7-aa7bb0c68954": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107923ms
Oct 30 00:52:40.774: INFO: Pod "downwardapi-volume-200bdb31-5e1d-40fb-96f7-aa7bb0c68954": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007171296s
Oct 30 00:52:42.777: INFO: Pod "downwardapi-volume-200bdb31-5e1d-40fb-96f7-aa7bb0c68954": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010502717s
Oct 30 00:52:44.781: INFO: Pod "downwardapi-volume-200bdb31-5e1d-40fb-96f7-aa7bb0c68954": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014493361s
Oct 30 00:52:46.784: INFO: Pod "downwardapi-volume-200bdb31-5e1d-40fb-96f7-aa7bb0c68954": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018014144s
Oct 30 00:52:48.788: INFO: Pod "downwardapi-volume-200bdb31-5e1d-40fb-96f7-aa7bb0c68954": Phase="Pending", Reason="", readiness=false. Elapsed: 10.021311425s
Oct 30 00:52:50.792: INFO: Pod "downwardapi-volume-200bdb31-5e1d-40fb-96f7-aa7bb0c68954": Phase="Pending", Reason="", readiness=false. Elapsed: 12.025375013s
Oct 30 00:52:52.795: INFO: Pod "downwardapi-volume-200bdb31-5e1d-40fb-96f7-aa7bb0c68954": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.028717045s
STEP: Saw pod success
Oct 30 00:52:52.795: INFO: Pod "downwardapi-volume-200bdb31-5e1d-40fb-96f7-aa7bb0c68954" satisfied condition "Succeeded or Failed"
Oct 30 00:52:52.797: INFO: Trying to get logs from node node2 pod downwardapi-volume-200bdb31-5e1d-40fb-96f7-aa7bb0c68954 container client-container:
STEP: delete the pod
Oct 30 00:52:52.811: INFO: Waiting for pod downwardapi-volume-200bdb31-5e1d-40fb-96f7-aa7bb0c68954 to disappear
Oct 30 00:52:52.813: INFO: Pod downwardapi-volume-200bdb31-5e1d-40fb-96f7-aa7bb0c68954 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:52:52.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5185" for this suite.

• [SLOW TEST:14.104 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
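The DefaultMode being tested above is the file permission applied to entries in a projected volume. A minimal sketch, assuming a hypothetical pod name and a 0400 mode:

apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo         # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo"]  # shows -r-------- for podname
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400            # applied to every projected file unless overridden
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name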
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:52:46.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should add annotations for pods in rc [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating Agnhost RC
Oct 30 00:52:46.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4565 create -f -'
Oct 30 00:52:47.168: INFO: stderr: ""
Oct 30 00:52:47.168: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Oct 30 00:52:48.173: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 30 00:52:48.173: INFO: Found 0 / 1
Oct 30 00:52:49.172: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 30 00:52:49.172: INFO: Found 0 / 1
Oct 30 00:52:50.173: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 30 00:52:50.174: INFO: Found 0 / 1
Oct 30 00:52:51.173: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 30 00:52:51.173: INFO: Found 0 / 1
Oct 30 00:52:52.172: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 30 00:52:52.172: INFO: Found 0 / 1
Oct 30 00:52:53.173: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 30 00:52:53.173: INFO: Found 0 / 1
Oct 30 00:52:54.171: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 30 00:52:54.172: INFO: Found 1 / 1
Oct 30 00:52:54.172: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Oct 30 00:52:54.174: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 30 00:52:54.174: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Oct 30 00:52:54.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4565 patch pod agnhost-primary-gtcwv -p {"metadata":{"annotations":{"x":"y"}}}'
Oct 30 00:52:54.350: INFO: stderr: ""
Oct 30 00:52:54.350: INFO: stdout: "pod/agnhost-primary-gtcwv patched\n"
STEP: checking annotations
Oct 30 00:52:54.353: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 30 00:52:54.353: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:52:54.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4565" for this suite.

• [SLOW TEST:7.532 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460
    should add annotations for pods in rc [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":2,"skipped":37,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:52:52.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-c0694944-63d9-4fbb-ae4a-278fd929d3d3
STEP: Creating a pod to test consume configMaps
Oct 30 00:52:52.826: INFO: Waiting up to 5m0s for pod "pod-configmaps-f0edac1b-ea6d-4d8f-b9f0-eec0268219eb" in namespace "configmap-6201" to be "Succeeded or Failed"
Oct 30 00:52:52.828: INFO: Pod "pod-configmaps-f0edac1b-ea6d-4d8f-b9f0-eec0268219eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005511ms
Oct 30 00:52:54.831: INFO: Pod "pod-configmaps-f0edac1b-ea6d-4d8f-b9f0-eec0268219eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004649558s
Oct 30 00:52:56.834: INFO: Pod "pod-configmaps-f0edac1b-ea6d-4d8f-b9f0-eec0268219eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007827822s
STEP: Saw pod success
Oct 30 00:52:56.834: INFO: Pod "pod-configmaps-f0edac1b-ea6d-4d8f-b9f0-eec0268219eb" satisfied condition "Succeeded or Failed"
Oct 30 00:52:56.836: INFO: Trying to get logs from node node2 pod pod-configmaps-f0edac1b-ea6d-4d8f-b9f0-eec0268219eb container agnhost-container:
STEP: delete the pod
Oct 30 00:52:56.849: INFO: Waiting for pod pod-configmaps-f0edac1b-ea6d-4d8f-b9f0-eec0268219eb to disappear
Oct 30 00:52:56.852: INFO: Pod pod-configmaps-f0edac1b-ea6d-4d8f-b9f0-eec0268219eb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:52:56.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6201" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0}
SSSSSSSSSS
------------------------------
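"With mappings as non-root" above means the ConfigMap key is remapped to a custom path via items, and the pod runs under a non-root UID. A minimal sketch; all names and the UID are illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: map-demo                   # hypothetical
data:
  data-2: value-2
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo     # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # non-root UID; file must still be readable
  containers:
  - name: test
    image: busybox:1.29
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: map-demo
      items:
      - key: data-2
        path: path/to/data-2       # key remapped to a nested path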
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:52:48.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-393adee5-8605-4455-82c6-090ecdadd4f2
STEP: Creating a pod to test consume configMaps
Oct 30 00:52:48.866: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b5ac8b2d-e51f-49e5-a41e-94cfc82f9689" in namespace "projected-2016" to be "Succeeded or Failed"
Oct 30 00:52:48.870: INFO: Pod "pod-projected-configmaps-b5ac8b2d-e51f-49e5-a41e-94cfc82f9689": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101433ms
Oct 30 00:52:50.873: INFO: Pod "pod-projected-configmaps-b5ac8b2d-e51f-49e5-a41e-94cfc82f9689": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007641429s
Oct 30 00:52:52.876: INFO: Pod "pod-projected-configmaps-b5ac8b2d-e51f-49e5-a41e-94cfc82f9689": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010646847s
Oct 30 00:52:54.880: INFO: Pod "pod-projected-configmaps-b5ac8b2d-e51f-49e5-a41e-94cfc82f9689": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014413956s
Oct 30 00:52:56.883: INFO: Pod "pod-projected-configmaps-b5ac8b2d-e51f-49e5-a41e-94cfc82f9689": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016921999s
STEP: Saw pod success
Oct 30 00:52:56.883: INFO: Pod "pod-projected-configmaps-b5ac8b2d-e51f-49e5-a41e-94cfc82f9689" satisfied condition "Succeeded or Failed"
Oct 30 00:52:56.885: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-b5ac8b2d-e51f-49e5-a41e-94cfc82f9689 container agnhost-container:
STEP: delete the pod
Oct 30 00:52:56.900: INFO: Waiting for pod pod-projected-configmaps-b5ac8b2d-e51f-49e5-a41e-94cfc82f9689 to disappear
Oct 30 00:52:56.902: INFO: Pod pod-projected-configmaps-b5ac8b2d-e51f-49e5-a41e-94cfc82f9689 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:52:56.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2016" for this suite.

• [SLOW TEST:8.081 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:52:52.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's command
Oct 30 00:52:52.900: INFO: Waiting up to 5m0s for pod "var-expansion-424b46d4-e2a8-47d3-953f-3462a19bdd85" in namespace "var-expansion-7766" to be "Succeeded or Failed"
Oct 30 00:52:52.903: INFO: Pod "var-expansion-424b46d4-e2a8-47d3-953f-3462a19bdd85": Phase="Pending", Reason="", readiness=false. Elapsed: 3.033314ms
Oct 30 00:52:54.908: INFO: Pod "var-expansion-424b46d4-e2a8-47d3-953f-3462a19bdd85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007668441s
Oct 30 00:52:56.911: INFO: Pod "var-expansion-424b46d4-e2a8-47d3-953f-3462a19bdd85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010448019s
STEP: Saw pod success
Oct 30 00:52:56.911: INFO: Pod "var-expansion-424b46d4-e2a8-47d3-953f-3462a19bdd85" satisfied condition "Succeeded or Failed"
Oct 30 00:52:56.913: INFO: Trying to get logs from node node1 pod var-expansion-424b46d4-e2a8-47d3-953f-3462a19bdd85 container dapi-container:
STEP: delete the pod
Oct 30 00:52:56.924: INFO: Waiting for pod var-expansion-424b46d4-e2a8-47d3-953f-3462a19bdd85 to disappear
Oct 30 00:52:56.926: INFO: Pod var-expansion-424b46d4-e2a8-47d3-953f-3462a19bdd85 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:52:56.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7766" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
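The variable expansion verified above is done by the kubelet, not by a shell: $(VAR) references in command/args are substituted from the container's env before the process starts. A minimal sketch with hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo         # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    env:
    - name: MESSAGE
      value: "hello from the environment"
    # $(MESSAGE) is expanded by the kubelet before the command runs
    command: ["sh", "-c", "echo $(MESSAGE)"]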
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:52:38.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
W1030 00:52:38.794408 38 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 00:52:38.794: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 00:52:38.796: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 30 00:52:39.150: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 30 00:52:41.159: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151959, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151959, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151959, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151959, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 30 00:52:43.163: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151959, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151959, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151959, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151959, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 30 00:52:45.162: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151959, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151959, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151959, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151959, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 30 00:52:47.163: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151959, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151959, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151959, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151959, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 30 00:52:50.168: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 30 00:52:50.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6461-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:52:58.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4666" for this suite.
STEP: Destroying namespace "webhook-4666-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:19.521 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":1,"skipped":28,"failed":0}
SSS
------------------------------
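The registration step in the webhook spec above creates a MutatingWebhookConfiguration pointing at the in-cluster webhook service. A rough sketch of the shape of such an object; every name, namespace, path, and rule here is hypothetical, and caBundle is a placeholder for the base64-encoded CA certificate:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-mutating-webhook   # hypothetical
webhooks:
- name: mutate-crd.example.com     # hypothetical
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  clientConfig:
    service:
      namespace: webhook-ns        # hypothetical namespace of the webhook Service
      name: e2e-test-webhook
      path: /mutating-custom-resource
    caBundle: PLACEHOLDER_BASE64_CA  # placeholder, not a real bundle
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["*"]             # matches all served versions of the CR
    operations: ["CREATE", "UPDATE"]
    resources: ["*"]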
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:52:54.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-4536/configmap-test-ba0e08df-d1e1-4b21-b9de-b83e54eeec49
STEP: Creating a pod to test consume configMaps
Oct 30 00:52:54.452: INFO: Waiting up to 5m0s for pod "pod-configmaps-3aaaf620-6219-4a84-a4f3-798e6b8b18ad" in namespace "configmap-4536" to be "Succeeded or Failed"
Oct 30 00:52:54.457: INFO: Pod "pod-configmaps-3aaaf620-6219-4a84-a4f3-798e6b8b18ad": Phase="Pending", Reason="", readiness=false. Elapsed: 5.690042ms
Oct 30 00:52:56.460: INFO: Pod "pod-configmaps-3aaaf620-6219-4a84-a4f3-798e6b8b18ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008727821s
Oct 30 00:52:58.464: INFO: Pod "pod-configmaps-3aaaf620-6219-4a84-a4f3-798e6b8b18ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012119082s
STEP: Saw pod success
Oct 30 00:52:58.464: INFO: Pod "pod-configmaps-3aaaf620-6219-4a84-a4f3-798e6b8b18ad" satisfied condition "Succeeded or Failed"
Oct 30 00:52:58.466: INFO: Trying to get logs from node node1 pod pod-configmaps-3aaaf620-6219-4a84-a4f3-798e6b8b18ad container env-test:
STEP: delete the pod
Oct 30 00:52:58.531: INFO: Waiting for pod pod-configmaps-3aaaf620-6219-4a84-a4f3-798e6b8b18ad to disappear
Oct 30 00:52:58.534: INFO: Pod pod-configmaps-3aaaf620-6219-4a84-a4f3-798e6b8b18ad no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:52:58.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4536" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":56,"failed":0}
SSSS
------------------------------
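Consuming a ConfigMap "via the environment", as in the spec above, means injecting keys as environment variables rather than mounting them. A minimal sketch with hypothetical names:

apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config                 # hypothetical
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo         # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "env | grep DATA_1"]  # prints DATA_1=value-1
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: env-config
          key: data-1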
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:52:56.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-7ce0f516-6888-4f2f-829a-e630c84b967f
STEP: Creating a pod to test consume secrets
Oct 30 00:52:57.010: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6cb593fb-a6c5-4fd3-b20f-e7a6a6bcf73c" in namespace "projected-8313" to be "Succeeded or Failed"
Oct 30 00:52:57.012: INFO: Pod "pod-projected-secrets-6cb593fb-a6c5-4fd3-b20f-e7a6a6bcf73c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027412ms
Oct 30 00:52:59.015: INFO: Pod "pod-projected-secrets-6cb593fb-a6c5-4fd3-b20f-e7a6a6bcf73c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005141837s
Oct 30 00:53:01.018: INFO: Pod "pod-projected-secrets-6cb593fb-a6c5-4fd3-b20f-e7a6a6bcf73c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008614038s
STEP: Saw pod success
Oct 30 00:53:01.018: INFO: Pod "pod-projected-secrets-6cb593fb-a6c5-4fd3-b20f-e7a6a6bcf73c" satisfied condition "Succeeded or Failed"
Oct 30 00:53:01.020: INFO: Trying to get logs from node node1 pod pod-projected-secrets-6cb593fb-a6c5-4fd3-b20f-e7a6a6bcf73c container projected-secret-volume-test:
STEP: delete the pod
Oct 30 00:53:01.031: INFO: Waiting for pod pod-projected-secrets-6cb593fb-a6c5-4fd3-b20f-e7a6a6bcf73c to disappear
Oct 30 00:53:01.033: INFO: Pod pod-projected-secrets-6cb593fb-a6c5-4fd3-b20f-e7a6a6bcf73c no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:53:01.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8313" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":36,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
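The projected-secret spec above combines three knobs: a non-root runAsUser, a pod-level fsGroup applied to volume files, and a restrictive defaultMode on the projection. A minimal sketch; names, UID/GID, and mode are illustrative:

apiVersion: v1
kind: Secret
metadata:
  name: secret-demo                # hypothetical
data:
  data-1: dmFsdWUtMQ==             # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo      # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # non-root
    fsGroup: 1001                  # volume files get this group
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -ln /etc/projected-secret && cat /etc/projected-secret/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440            # owner+group read-only
      sources:
      - secret:
          name: secret-demo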
\"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-dq98w\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-30T00:52:38Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-30T00:52:51Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-30T00:52:51Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-30T00:52:38Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://8d5af0cb35a90154f625e92077cf2d2773aff1f01e40d121955fcb8ca27afacc\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-10-30T00:52:51Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.10.190.208\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.4.151\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.4.151\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-10-30T00:52:38Z\"\n }\n}\n" STEP: replace the image in the pod Oct 30 00:52:54.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3149 replace -f -' Oct 30 00:52:54.550: INFO: stderr: "" Oct 30 00:52:54.550: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-1 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1552 Oct 30 00:52:54.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3149 delete pods e2e-test-httpd-pod' Oct 30 00:53:02.920: INFO: stderr: "" Oct 30 00:53:02.920: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:02.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3149" for this suite. 
• [SLOW TEST:24.160 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":2,"skipped":22,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:52:56.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token STEP: reading a file in the container Oct 30 00:53:03.432: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9236 pod-service-account-a5e8e7dc-5315-4d31-b111-884e85130a32 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Oct 30 00:53:03.893: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9236 pod-service-account-a5e8e7dc-5315-4d31-b111-884e85130a32 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Oct 30 00:53:04.142: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9236 pod-service-account-a5e8e7dc-5315-4d31-b111-884e85130a32 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:04.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9236" for this suite. 
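Note: the three reads above target the standard projected service-account mount; a minimal sketch of the same checks against any pod running with the default service account (pod and container names are illustrative):

  # the kubelet projects token, CA bundle, and namespace under a fixed path
  for f in token ca.crt namespace; do
    kubectl exec --namespace=svcaccounts-9236 my-pod -c my-container -- \
      cat /var/run/secrets/kubernetes.io/serviceaccount/$f
  done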
• [SLOW TEST:7.513 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":3,"skipped":31,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:52:47.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Oct 30 00:52:47.749: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Oct 30 00:52:49.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151967, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151967, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151967, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151967, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:52:51.760: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151967, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151967, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151967, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151967, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:52:53.760: INFO: deployment 
status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151967, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151967, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151967, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151967, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 00:52:56.770: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:52:56.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:04.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2160" for this suite. 
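Note: the v1-to-v2 conversion exercised here is wired up through the CRD's spec.conversion stanza pointing at the webhook Service deployed above; a minimal sketch of such a CRD (group, kind, path, and port are illustrative, and caBundle must carry the webhook's real serving CA):

  kubectl apply -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: widgets.example.com
  spec:
    group: example.com
    scope: Namespaced
    names: {plural: widgets, singular: widget, kind: Widget, listKind: WidgetList}
    versions:
    - name: v1
      served: true
      storage: true
      schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
    - name: v2
      served: true
      storage: false
      schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
    conversion:
      strategy: Webhook
      webhook:
        conversionReviewVersions: ["v1"]
        clientConfig:
          caBundle: <base64-encoded serving CA>
          service:
            namespace: crd-webhook-2160
            name: e2e-test-crd-conversion-webhook
            path: /convert
            port: 9443
  EOF
  # creating the object as v1 and then listing it at v2 drives a ConversionReview call
  kubectl get --raw /apis/example.com/v2/namespaces/default/widgets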
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:17.586 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":3,"skipped":47,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:01.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:53:01.103: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:06.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7876" for this suite. 
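Note: the get/update/patch calls above go to the CustomResourceDefinition's status sub-resource, which is addressable directly over the API; a minimal sketch via the raw endpoints (CRD name is illustrative and the patched condition is a dummy; controllers may overwrite it on the next sync):

  # read the status subresource of a CRD
  kubectl get --raw /apis/apiextensions.k8s.io/v1/customresourcedefinitions/widgets.example.com/status
  # merge-patch a condition into .status through the same subresource
  kubectl proxy --port=8001 &
  curl -X PATCH -H 'Content-Type: application/merge-patch+json' \
    -d '{"status":{"conditions":[{"type":"StatusUpdated","status":"True","reason":"E2E","message":"patched via /status","lastTransitionTime":"2021-10-30T00:53:00Z"}]}}' \
    http://127.0.0.1:8001/apis/apiextensions.k8s.io/v1/customresourcedefinitions/widgets.example.com/status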
• [SLOW TEST:5.559 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":4,"skipped":54,"failed":0} SSSSS ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":22,"failed":0} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:52:56.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Oct 30 00:52:56.930: INFO: namespace kubectl-6454 Oct 30 00:52:56.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6454 create -f -' Oct 30 00:52:57.318: INFO: stderr: "" Oct 30 00:52:57.318: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Oct 30 00:52:58.321: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 00:52:58.321: INFO: Found 0 / 1 Oct 30 00:52:59.322: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 00:52:59.322: INFO: Found 0 / 1 Oct 30 00:53:00.321: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 00:53:00.322: INFO: Found 0 / 1 Oct 30 00:53:01.322: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 00:53:01.322: INFO: Found 0 / 1 Oct 30 00:53:02.321: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 00:53:02.321: INFO: Found 1 / 1 Oct 30 00:53:02.321: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Oct 30 00:53:02.324: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 00:53:02.324: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Oct 30 00:53:02.324: INFO: wait on agnhost-primary startup in kubectl-6454 Oct 30 00:53:02.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6454 logs agnhost-primary-dz58m agnhost-primary' Oct 30 00:53:02.476: INFO: stderr: "" Oct 30 00:53:02.476: INFO: stdout: "Paused\n" STEP: exposing RC Oct 30 00:53:02.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6454 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Oct 30 00:53:02.686: INFO: stderr: "" Oct 30 00:53:02.686: INFO: stdout: "service/rm2 exposed\n" Oct 30 00:53:02.688: INFO: Service rm2 in namespace kubectl-6454 found. STEP: exposing service Oct 30 00:53:04.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6454 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Oct 30 00:53:04.902: INFO: stderr: "" Oct 30 00:53:04.902: INFO: stdout: "service/rm3 exposed\n" Oct 30 00:53:04.904: INFO: Service rm3 in namespace kubectl-6454 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:06.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6454" for this suite. • [SLOW TEST:10.004 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1223 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:52:38.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test W1030 00:52:38.751440 22 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 30 00:52:38.751: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 30 00:52:38.754: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-3479 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 30 00:52:38.759: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 30 00:52:38.793: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:52:40.798: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:52:42.796: INFO: The status of Pod netserver-0 is Pending, 
waiting for it to be Running (with Ready = true) Oct 30 00:52:44.798: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:52:46.797: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:52:48.797: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:52:50.798: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:52:52.796: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:52:54.797: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:52:56.797: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:52:58.797: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:53:00.798: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 30 00:53:00.804: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 30 00:53:04.840: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Oct 30 00:53:04.840: INFO: Going to poll 10.244.3.50 on port 8081 at least 0 times, with a maximum of 34 tries before failing Oct 30 00:53:04.842: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.3.50 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3479 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:53:04.842: INFO: >>> kubeConfig: /root/.kube/config Oct 30 00:53:05.928: INFO: Found all 1 expected endpoints: [netserver-0] Oct 30 00:53:05.928: INFO: Going to poll 10.244.4.154 on port 8081 at least 0 times, with a maximum of 34 tries before failing Oct 30 00:53:05.930: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.4.154 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3479 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:53:05.930: INFO: >>> kubeConfig: /root/.kube/config Oct 30 00:53:07.057: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:07.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3479" for this suite. 
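Note: each poll above is a one-shot UDP echo through netcat from the test pod to a netserver endpoint; a minimal sketch of the same probe, reusing the pod, namespace, and endpoint IP from the log:

  # ask the netserver for its hostname over UDP port 8081; a non-empty reply counts as success
  kubectl exec --namespace=pod-network-test-3479 host-test-container-pod -c agnhost-container -- \
    /bin/sh -c "echo hostName | nc -w 1 -u 10.244.3.50 8081 | grep -v '^\s*$'"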
• [SLOW TEST:28.343 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:52:58.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:14.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2063" for this suite. • [SLOW TEST:16.106 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":-1,"completed":4,"skipped":60,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:04.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-d6691c41-df9c-486e-ad8b-535ccf67a0a3 STEP: Creating a pod to test consume configMaps Oct 30 00:53:04.436: INFO: Waiting up to 5m0s for pod "pod-configmaps-85bf245a-2638-4b66-9b52-56e56d6b7cb1" in namespace "configmap-4597" to be "Succeeded or Failed" Oct 30 00:53:04.439: INFO: Pod "pod-configmaps-85bf245a-2638-4b66-9b52-56e56d6b7cb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039608ms Oct 30 00:53:06.441: INFO: Pod "pod-configmaps-85bf245a-2638-4b66-9b52-56e56d6b7cb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004330726s Oct 30 00:53:08.444: INFO: Pod "pod-configmaps-85bf245a-2638-4b66-9b52-56e56d6b7cb1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007078083s Oct 30 00:53:10.450: INFO: Pod "pod-configmaps-85bf245a-2638-4b66-9b52-56e56d6b7cb1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014002279s Oct 30 00:53:12.456: INFO: Pod "pod-configmaps-85bf245a-2638-4b66-9b52-56e56d6b7cb1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019018721s Oct 30 00:53:14.460: INFO: Pod "pod-configmaps-85bf245a-2638-4b66-9b52-56e56d6b7cb1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023110619s Oct 30 00:53:16.463: INFO: Pod "pod-configmaps-85bf245a-2638-4b66-9b52-56e56d6b7cb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.026431314s STEP: Saw pod success Oct 30 00:53:16.463: INFO: Pod "pod-configmaps-85bf245a-2638-4b66-9b52-56e56d6b7cb1" satisfied condition "Succeeded or Failed" Oct 30 00:53:16.465: INFO: Trying to get logs from node node1 pod pod-configmaps-85bf245a-2638-4b66-9b52-56e56d6b7cb1 container agnhost-container: STEP: delete the pod Oct 30 00:53:16.482: INFO: Waiting for pod pod-configmaps-85bf245a-2638-4b66-9b52-56e56d6b7cb1 to disappear Oct 30 00:53:16.483: INFO: Pod pod-configmaps-85bf245a-2638-4b66-9b52-56e56d6b7cb1 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:16.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4597" for this suite. 
• [SLOW TEST:12.089 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":32,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:16.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:16.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-6799" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":5,"skipped":33,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:03.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:53:03.066: INFO: Creating simple deployment test-new-deployment Oct 30 00:53:03.075: INFO: deployment "test-new-deployment" doesn't have the required revision set Oct 30 00:53:05.082: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:53:07.085: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:53:09.086: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:53:11.086: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:53:13.087: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, 
loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:53:15.086: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151983, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the deployment Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 30 00:53:17.108: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-1364 29c72807-15d3-46ff-86a1-608488581f24 59962 3 2021-10-30 00:53:03 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-10-30 00:53:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-30 00:53:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004dddf58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-30 00:53:16 +0000 UTC,LastTransitionTime:2021-10-30 00:53:16 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2021-10-30 00:53:16 +0000 UTC,LastTransitionTime:2021-10-30 00:53:03 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 30 00:53:17.110: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-1364 1ca04dfb-f2ac-4e07-9eb6-ec21578f5d13 59964 3 2021-10-30 00:53:03 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 29c72807-15d3-46ff-86a1-608488581f24 0xc000f9e3c7 0xc000f9e3c8}] [] [{kube-controller-manager Update apps/v1 2021-10-30 00:53:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"29c72807-15d3-46ff-86a1-608488581f24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000f9e448 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 30 00:53:17.114: INFO: Pod "test-new-deployment-847dcfb7fb-dvjdr" is available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-dvjdr test-new-deployment-847dcfb7fb- deployment-1364 ba478812-7038-455f-96ce-07e233f05064 59926 0 2021-10-30 00:53:03 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.61" ], "mac": "3a:be:85:c6:84:67", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.61" ], "mac": "3a:be:85:c6:84:67", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 1ca04dfb-f2ac-4e07-9eb6-ec21578f5d13 0xc000f9e83f 0xc000f9e850}] [] [{kube-controller-manager Update v1 2021-10-30 00:53:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca04dfb-f2ac-4e07-9eb6-ec21578f5d13\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 00:53:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 00:53:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.61\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9fmjk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9fmjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 00:53:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 00:53:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 00:53:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 00:53:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.61,StartTime:2021-10-30 00:53:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 00:53:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://a569fecb0c178037b4bec4870a18c72bd3a1c30d7fd76e3af075f47212b44e55,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.61,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 00:53:17.114: INFO: Pod "test-new-deployment-847dcfb7fb-jxp2w" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-jxp2w test-new-deployment-847dcfb7fb- deployment-1364 f177ff8e-d26d-46ec-83c7-3896a4e32264 59968 0 2021-10-30 00:53:17 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 1ca04dfb-f2ac-4e07-9eb6-ec21578f5d13 0xc000f9ea3f 0xc000f9ea50}] [] [{kube-controller-manager Update v1 2021-10-30 00:53:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca04dfb-f2ac-4e07-9eb6-ec21578f5d13\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qh95q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qh95q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 00:53:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:17.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1364" for this suite. • [SLOW TEST:14.074 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":3,"skipped":77,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:17.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:17.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-7464" for this suite. 
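Note: the scale sub-resource driven by this spec is also reachable from the command line; a minimal sketch against the deployment created above (namespace and name taken from the log):

  # read the autoscaling/v1 Scale object exposed under .../scale
  kubectl get --raw /apis/apps/v1/namespaces/deployment-1364/deployments/test-new-deployment/scale
  # update desired replicas through the same subresource
  kubectl scale deployment test-new-deployment --namespace=deployment-1364 --replicas=4
  kubectl get deployment test-new-deployment --namespace=deployment-1364 \
    -o jsonpath='{.spec.replicas}'   # expected: 4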
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":4,"skipped":87,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:52:38.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W1030 00:52:39.104894 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 30 00:52:39.105: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 30 00:52:39.106: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-d39015ed-66af-4ca7-a98e-ee8461d123ef in namespace container-probe-7175 Oct 30 00:52:49.124: INFO: Started pod liveness-d39015ed-66af-4ca7-a98e-ee8461d123ef in namespace container-probe-7175 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 00:52:49.127: INFO: Initial restart count of pod liveness-d39015ed-66af-4ca7-a98e-ee8461d123ef is 0 Oct 30 00:53:17.248: INFO: Restart count of pod container-probe-7175/liveness-d39015ed-66af-4ca7-a98e-ee8461d123ef is now 1 (28.120735701s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:17.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7175" for this suite. 
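The restart recorded above (restartCount going from 0 to 1 after roughly 28 seconds) is the expected effect of an HTTP liveness probe that starts failing. A minimal sketch of an equivalent pod, assuming the agnhost test image used throughout this suite; the pod name and probe thresholds are illustrative, not the test's exact spec:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo            # illustrative name
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["liveness"]                # serves /healthz on 8080, then starts failing it
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 1
EOF
# RESTARTS should tick up once /healthz begins returning errors.
kubectl get pod liveness-http-demo -w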
• [SLOW TEST:38.455 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":44,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:52:51.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-secret-dfm9 STEP: Creating a pod to test atomic-volume-subpath Oct 30 00:52:51.480: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-dfm9" in namespace "subpath-9866" to be "Succeeded or Failed" Oct 30 00:52:51.482: INFO: Pod "pod-subpath-test-secret-dfm9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156858ms Oct 30 00:52:53.485: INFO: Pod "pod-subpath-test-secret-dfm9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005068104s Oct 30 00:52:55.488: INFO: Pod "pod-subpath-test-secret-dfm9": Phase="Running", Reason="", readiness=true. Elapsed: 4.0087582s Oct 30 00:52:57.491: INFO: Pod "pod-subpath-test-secret-dfm9": Phase="Running", Reason="", readiness=true. Elapsed: 6.011806113s Oct 30 00:52:59.494: INFO: Pod "pod-subpath-test-secret-dfm9": Phase="Running", Reason="", readiness=true. Elapsed: 8.014398694s Oct 30 00:53:01.500: INFO: Pod "pod-subpath-test-secret-dfm9": Phase="Running", Reason="", readiness=true. Elapsed: 10.020714869s Oct 30 00:53:03.502: INFO: Pod "pod-subpath-test-secret-dfm9": Phase="Running", Reason="", readiness=true. Elapsed: 12.022909671s Oct 30 00:53:05.506: INFO: Pod "pod-subpath-test-secret-dfm9": Phase="Running", Reason="", readiness=true. Elapsed: 14.026665435s Oct 30 00:53:07.509: INFO: Pod "pod-subpath-test-secret-dfm9": Phase="Running", Reason="", readiness=true. Elapsed: 16.02894359s Oct 30 00:53:09.513: INFO: Pod "pod-subpath-test-secret-dfm9": Phase="Running", Reason="", readiness=true. Elapsed: 18.032958715s Oct 30 00:53:11.521: INFO: Pod "pod-subpath-test-secret-dfm9": Phase="Running", Reason="", readiness=true. Elapsed: 20.041451276s Oct 30 00:53:13.525: INFO: Pod "pod-subpath-test-secret-dfm9": Phase="Running", Reason="", readiness=true. Elapsed: 22.045267619s Oct 30 00:53:15.530: INFO: Pod "pod-subpath-test-secret-dfm9": Phase="Running", Reason="", readiness=true. Elapsed: 24.050202876s Oct 30 00:53:17.534: INFO: Pod "pod-subpath-test-secret-dfm9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.054300279s STEP: Saw pod success Oct 30 00:53:17.534: INFO: Pod "pod-subpath-test-secret-dfm9" satisfied condition "Succeeded or Failed" Oct 30 00:53:17.536: INFO: Trying to get logs from node node1 pod pod-subpath-test-secret-dfm9 container test-container-subpath-secret-dfm9: STEP: delete the pod Oct 30 00:53:17.549: INFO: Waiting for pod pod-subpath-test-secret-dfm9 to disappear Oct 30 00:53:17.551: INFO: Pod pod-subpath-test-secret-dfm9 no longer exists STEP: Deleting pod pod-subpath-test-secret-dfm9 Oct 30 00:53:17.551: INFO: Deleting pod "pod-subpath-test-secret-dfm9" in namespace "subpath-9866" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:17.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9866" for this suite. • [SLOW TEST:26.124 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":26,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:06.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:17.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4146" for this suite. • [SLOW TEST:11.066 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":-1,"completed":5,"skipped":59,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:04.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 00:53:05.567: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 00:53:07.574: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:53:09.580: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:53:11.580: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:53:13.578: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:53:15.579: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:53:17.578: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771151985, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 00:53:20.589: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:20.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2623" for this suite. STEP: Destroying namespace "webhook-2623-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.746 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":4,"skipped":70,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:14.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-72910dd8-018a-4b5c-a3a6-ae7df97384a7 STEP: Creating a pod to test consume secrets Oct 30 00:53:14.716: INFO: Waiting up to 5m0s for pod "pod-secrets-9c2fb8ce-a7eb-4fd5-a9f8-92bd1dac1294" in namespace "secrets-4979" to be "Succeeded or Failed" Oct 30 00:53:14.719: INFO: Pod "pod-secrets-9c2fb8ce-a7eb-4fd5-a9f8-92bd1dac1294": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195764ms Oct 30 00:53:16.721: INFO: Pod "pod-secrets-9c2fb8ce-a7eb-4fd5-a9f8-92bd1dac1294": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004417915s Oct 30 00:53:18.725: INFO: Pod "pod-secrets-9c2fb8ce-a7eb-4fd5-a9f8-92bd1dac1294": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008734542s Oct 30 00:53:20.729: INFO: Pod "pod-secrets-9c2fb8ce-a7eb-4fd5-a9f8-92bd1dac1294": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.012600842s STEP: Saw pod success Oct 30 00:53:20.729: INFO: Pod "pod-secrets-9c2fb8ce-a7eb-4fd5-a9f8-92bd1dac1294" satisfied condition "Succeeded or Failed" Oct 30 00:53:20.731: INFO: Trying to get logs from node node2 pod pod-secrets-9c2fb8ce-a7eb-4fd5-a9f8-92bd1dac1294 container secret-volume-test: STEP: delete the pod Oct 30 00:53:20.748: INFO: Waiting for pod pod-secrets-9c2fb8ce-a7eb-4fd5-a9f8-92bd1dac1294 to disappear Oct 30 00:53:20.751: INFO: Pod pod-secrets-9c2fb8ce-a7eb-4fd5-a9f8-92bd1dac1294 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:20.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4979" for this suite. • [SLOW TEST:6.084 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":65,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:16.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Oct 30 00:53:16.651: INFO: Waiting up to 5m0s for pod "pod-15ff2363-2f9e-4d85-980a-f3305cf83198" in namespace "emptydir-8070" to be "Succeeded or Failed" Oct 30 00:53:16.653: INFO: Pod "pod-15ff2363-2f9e-4d85-980a-f3305cf83198": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148283ms Oct 30 00:53:18.658: INFO: Pod "pod-15ff2363-2f9e-4d85-980a-f3305cf83198": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006821406s Oct 30 00:53:20.661: INFO: Pod "pod-15ff2363-2f9e-4d85-980a-f3305cf83198": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010341822s Oct 30 00:53:22.664: INFO: Pod "pod-15ff2363-2f9e-4d85-980a-f3305cf83198": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012945243s Oct 30 00:53:24.671: INFO: Pod "pod-15ff2363-2f9e-4d85-980a-f3305cf83198": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020313542s Oct 30 00:53:26.675: INFO: Pod "pod-15ff2363-2f9e-4d85-980a-f3305cf83198": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.024111721s STEP: Saw pod success Oct 30 00:53:26.675: INFO: Pod "pod-15ff2363-2f9e-4d85-980a-f3305cf83198" satisfied condition "Succeeded or Failed" Oct 30 00:53:26.677: INFO: Trying to get logs from node node1 pod pod-15ff2363-2f9e-4d85-980a-f3305cf83198 container test-container: STEP: delete the pod Oct 30 00:53:26.691: INFO: Waiting for pod pod-15ff2363-2f9e-4d85-980a-f3305cf83198 to disappear Oct 30 00:53:26.693: INFO: Pod pod-15ff2363-2f9e-4d85-980a-f3305cf83198 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:26.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8070" for this suite. • [SLOW TEST:10.082 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":59,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:26.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:26.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2810" for this suite. 
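The Events steps above are plain CRUD against the core events API; the same sequence can be replayed with kubectl alone. A minimal sketch, with the event name and namespace purely illustrative:

# List events across all namespaces, as the test does before and after.
kubectl get events -A
# Fetch, patch, and delete one event by name (substitute a real name).
kubectl get event my-test-event -n default -o yaml
kubectl patch event my-test-event -n default --type=merge -p '{"message":"patched"}'
kubectl delete event my-test-event -n default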
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":7,"skipped":73,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:17.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-f8bf9e5a-52c0-48c2-ae8b-c49192d298b2 STEP: Creating a pod to test consume secrets Oct 30 00:53:17.605: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6cd3b593-08d3-4b12-ae1f-b8b16679ffe9" in namespace "projected-4905" to be "Succeeded or Failed" Oct 30 00:53:17.607: INFO: Pod "pod-projected-secrets-6cd3b593-08d3-4b12-ae1f-b8b16679ffe9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.558385ms Oct 30 00:53:19.610: INFO: Pod "pod-projected-secrets-6cd3b593-08d3-4b12-ae1f-b8b16679ffe9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004831504s Oct 30 00:53:21.618: INFO: Pod "pod-projected-secrets-6cd3b593-08d3-4b12-ae1f-b8b16679ffe9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0135518s Oct 30 00:53:23.622: INFO: Pod "pod-projected-secrets-6cd3b593-08d3-4b12-ae1f-b8b16679ffe9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017011733s Oct 30 00:53:25.626: INFO: Pod "pod-projected-secrets-6cd3b593-08d3-4b12-ae1f-b8b16679ffe9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020937163s Oct 30 00:53:27.629: INFO: Pod "pod-projected-secrets-6cd3b593-08d3-4b12-ae1f-b8b16679ffe9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.024129896s STEP: Saw pod success Oct 30 00:53:27.629: INFO: Pod "pod-projected-secrets-6cd3b593-08d3-4b12-ae1f-b8b16679ffe9" satisfied condition "Succeeded or Failed" Oct 30 00:53:27.631: INFO: Trying to get logs from node node1 pod pod-projected-secrets-6cd3b593-08d3-4b12-ae1f-b8b16679ffe9 container projected-secret-volume-test: STEP: delete the pod Oct 30 00:53:27.668: INFO: Waiting for pod pod-projected-secrets-6cd3b593-08d3-4b12-ae1f-b8b16679ffe9 to disappear Oct 30 00:53:27.671: INFO: Pod pod-projected-secrets-6cd3b593-08d3-4b12-ae1f-b8b16679ffe9 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:27.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4905" for this suite. 
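The projected-secret test above consumes a secret through the projected volume plugin, remapping one key to a new relative path with an explicit per-item file mode. A minimal sketch of that volume shape; every name below is hypothetical and the referenced secret must already exist:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo          # hypothetical name
spec:
  restartPolicy: Never
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: mysecret               # hypothetical Secret
          items:
          - key: username              # remap key "username"...
            path: my-group/my-username # ...to this relative path
            mode: 0400                 # per-item mode, as the test sets
  containers:
  - name: check
    image: busybox:1.35
    command: ["sh", "-c", "ls -l /etc/projected/my-group"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
      readOnly: true
EOF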
• [SLOW TEST:10.112 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":28,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:17.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Oct 30 00:53:17.231: INFO: The status of Pod pod-update-747d86dc-e3c2-4f66-a13e-701e19dc64b0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:53:19.234: INFO: The status of Pod pod-update-747d86dc-e3c2-4f66-a13e-701e19dc64b0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:53:21.237: INFO: The status of Pod pod-update-747d86dc-e3c2-4f66-a13e-701e19dc64b0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:53:23.236: INFO: The status of Pod pod-update-747d86dc-e3c2-4f66-a13e-701e19dc64b0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:53:25.235: INFO: The status of Pod pod-update-747d86dc-e3c2-4f66-a13e-701e19dc64b0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:53:27.235: INFO: The status of Pod pod-update-747d86dc-e3c2-4f66-a13e-701e19dc64b0 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Oct 30 00:53:27.748: INFO: Successfully updated pod "pod-update-747d86dc-e3c2-4f66-a13e-701e19dc64b0" STEP: verifying the updated pod is in kubernetes Oct 30 00:53:27.753: INFO: Pod update OK [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:27.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5757" for this suite. 
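The update verified above ("Pod update OK") touches one of the few pod fields that remain mutable after creation, its labels. A hand-run equivalent, with the pod name hypothetical:

# Labels can be changed on a live pod; most of spec cannot.
kubectl label pod pod-update-demo time="$(date +%s)" --overwrite
kubectl get pod pod-update-demo --show-labels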
• [SLOW TEST:10.559 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":96,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:52:58.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8717.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8717.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8717.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8717.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 30 00:53:14.359: INFO: DNS probes using dns-test-8d020bd8-8f16-4f4f-99b3-746736cd7bbd succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8717.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8717.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8717.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8717.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 30 00:53:22.399: INFO: DNS probes using dns-test-2a5cfe6d-ee0b-464c-91ec-89ead074a232 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8717.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8717.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8717.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8717.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 30 00:53:28.443: INFO: DNS probes using dns-test-34d13bfe-fb44-43a2-9b7e-6124bd9687c9 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:28.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8717" for this suite. • [SLOW TEST:30.162 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":2,"skipped":31,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:20.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:53:20.796: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Oct 30 00:53:25.800: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 30 00:53:25.801: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 30 00:53:29.824: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-1427 47b8225b-52f3-4976-960a-a52c6447ed1f 60436 1 2021-10-30 00:53:25 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-10-30 00:53:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-30 00:53:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00507a338 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-30 00:53:25 +0000 UTC,LastTransitionTime:2021-10-30 00:53:25 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-5b4d99b59b" has successfully progressed.,LastUpdateTime:2021-10-30 00:53:29 +0000 UTC,LastTransitionTime:2021-10-30 00:53:25 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 30 00:53:29.827: INFO: New ReplicaSet "test-cleanup-deployment-5b4d99b59b" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5b4d99b59b deployment-1427 f0686229-bdb2-4ff0-8153-74c737148878 60426 1 2021-10-30 00:53:25 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 47b8225b-52f3-4976-960a-a52c6447ed1f 0xc00507a6d7 0xc00507a6d8}] [] [{kube-controller-manager Update apps/v1 2021-10-30 00:53:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47b8225b-52f3-4976-960a-a52c6447ed1f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5b4d99b59b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00507a768 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 30 00:53:29.830: INFO: Pod "test-cleanup-deployment-5b4d99b59b-gk25h" is available: &Pod{ObjectMeta:{test-cleanup-deployment-5b4d99b59b-gk25h test-cleanup-deployment-5b4d99b59b- deployment-1427 a929c04b-23a0-44cf-aafe-120a9b509f2b 60425 0 2021-10-30 00:53:25 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.172" ], "mac": "d2:05:af:c2:b5:53", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.172" ], "mac": "d2:05:af:c2:b5:53", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-cleanup-deployment-5b4d99b59b f0686229-bdb2-4ff0-8153-74c737148878 0xc00507aabf 0xc00507aad0}] [] [{kube-controller-manager Update v1 2021-10-30 00:53:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f0686229-bdb2-4ff0-8153-74c737148878\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 00:53:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 00:53:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.172\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gm8cd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gm8cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vo
lumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 00:53:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 00:53:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 00:53:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 00:53:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.172,StartTime:2021-10-30 00:53:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 00:53:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://696e9fb73e08eafdf1ce768ac500373dea0ac6985ccfa728a4ab5aec29ce95ed,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.172,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:29.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1427" for this suite. 
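The cleanup behavior verified above follows from the deployment dump itself: RevisionHistoryLimit is *0, so once the new ReplicaSet takes over, superseded ReplicaSets are deleted rather than kept around at zero replicas. A minimal sketch of that knob, with the deployment name hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo                 # hypothetical name
spec:
  replicas: 1
  revisionHistoryLimit: 0            # retain no old ReplicaSets
  selector:
    matchLabels: {app: cleanup-demo}
  template:
    metadata:
      labels: {app: cleanup-demo}
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
EOF
# After any rollout, only the current ReplicaSet should remain.
kubectl get rs -l app=cleanup-demo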
• [SLOW TEST:9.068 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":6,"skipped":70,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:29.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Starting the proxy Oct 30 00:53:29.895: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2699 proxy --unix-socket=/tmp/kubectl-proxy-unix658599280/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:29.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2699" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":7,"skipped":86,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:07.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Oct 30 00:53:07.121: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:53:09.124: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:53:11.127: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:53:13.125: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:53:15.126: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Oct 30 00:53:15.141: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:53:17.144: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:53:19.145: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:53:21.144: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:53:23.144: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:53:25.146: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Oct 30 00:53:25.158: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 30 00:53:25.161: INFO: Pod pod-with-poststart-http-hook still exists Oct 30 00:53:27.162: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 30 00:53:27.164: INFO: Pod pod-with-poststart-http-hook still exists Oct 30 00:53:29.162: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 30 00:53:29.165: INFO: Pod pod-with-poststart-http-hook still exists Oct 30 00:53:31.162: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 30 00:53:31.165: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:31.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-29" for this suite. 
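In the run above, the postStart hook on pod-with-poststart-http-hook issues an HTTP GET that the helper pod pod-handle-http-request receives, which is what "check poststart hook" verifies. A rough sketch of a pod carrying such a hook; the target host below is a placeholder, where the test uses its helper pod's IP:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poststart-http-demo          # hypothetical name
spec:
  containers:
  - name: main
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["pause"]
    lifecycle:
      postStart:
        httpGet:                     # kubelet sends this GET right after the container starts
          path: /echo?msg=poststart  # illustrative path
          host: 10.0.0.10            # placeholder target IP
          port: 8080
EOF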
• [SLOW TEST:24.083 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:20.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:31.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5825" for this suite. • [SLOW TEST:11.054 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":-1,"completed":5,"skipped":159,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:17.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:53:17.776: INFO: Pod name sample-pod: Found 0 pods out of 1 Oct 30 00:53:22.780: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: Scaling up "test-rs" replicaset Oct 30 00:53:26.808: INFO: Updating replica set "test-rs" STEP: patching the ReplicaSet Oct 30 00:53:26.818: INFO: observed ReplicaSet test-rs in namespace replicaset-9201 with ReadyReplicas 1, AvailableReplicas 1 Oct 30 00:53:26.832: INFO: observed ReplicaSet test-rs in namespace replicaset-9201 with ReadyReplicas 1, AvailableReplicas 1 Oct 30 00:53:26.846: INFO: observed ReplicaSet test-rs in namespace replicaset-9201 with ReadyReplicas 1, AvailableReplicas 1 Oct 30 00:53:26.849: INFO: observed ReplicaSet test-rs in namespace replicaset-9201 with ReadyReplicas 1, AvailableReplicas 1 Oct 30 00:53:31.007: INFO: observed ReplicaSet test-rs in namespace replicaset-9201 with ReadyReplicas 2, AvailableReplicas 2 Oct 30 00:53:32.006: INFO: observed Replicaset test-rs in namespace replicaset-9201 with ReadyReplicas 3 found true [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:32.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9201" for this suite. 
• [SLOW TEST:14.275 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":6,"skipped":65,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:27.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-projected-all-test-volume-330e6d53-e2b2-44fb-a845-caa613a99347 STEP: Creating secret with name secret-projected-all-test-volume-5e78cd4e-7622-4271-89fe-fb00659fe49f STEP: Creating a pod to test Check all projections for projected volume plugin Oct 30 00:53:27.072: INFO: Waiting up to 5m0s for pod "projected-volume-df7592fb-887e-4fec-bed4-df855e1e50b4" in namespace "projected-9116" to be "Succeeded or Failed" Oct 30 00:53:27.077: INFO: Pod "projected-volume-df7592fb-887e-4fec-bed4-df855e1e50b4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.095705ms Oct 30 00:53:29.080: INFO: Pod "projected-volume-df7592fb-887e-4fec-bed4-df855e1e50b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00765668s Oct 30 00:53:31.084: INFO: Pod "projected-volume-df7592fb-887e-4fec-bed4-df855e1e50b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011424028s Oct 30 00:53:33.089: INFO: Pod "projected-volume-df7592fb-887e-4fec-bed4-df855e1e50b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016979206s STEP: Saw pod success Oct 30 00:53:33.089: INFO: Pod "projected-volume-df7592fb-887e-4fec-bed4-df855e1e50b4" satisfied condition "Succeeded or Failed" Oct 30 00:53:33.091: INFO: Trying to get logs from node node1 pod projected-volume-df7592fb-887e-4fec-bed4-df855e1e50b4 container projected-all-volume-test: STEP: delete the pod Oct 30 00:53:33.903: INFO: Waiting for pod projected-volume-df7592fb-887e-4fec-bed4-df855e1e50b4 to disappear Oct 30 00:53:33.906: INFO: Pod projected-volume-df7592fb-887e-4fec-bed4-df855e1e50b4 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:33.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9116" for this suite. 
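The pod above mounts a single projected volume combining a secret, a configMap, and downward API items, which is what "all components that make up the projection API" refers to; a minimal sketch of that shape (volume name, mount path, command, and the image are illustrative assumptions, and the configMap/secret names carry generated suffixes in the actual run):

apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1   # assumed image
    command: ["sh", "-c", "cat /all-in-one/*"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all-in-one
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: configmap-projected-all-test-volume   # created in a STEP above
      - secret:
          name: secret-projected-all-test-volume      # created in a STEP above
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name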
• [SLOW TEST:6.884 seconds] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":180,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:27.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating Pod STEP: Reading file content from the nginx-container Oct 30 00:53:33.892: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-571 PodName:pod-sharedvolume-43d22547-c1b4-46d9-b7c0-c085cd141bc7 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:53:33.892: INFO: >>> kubeConfig: /root/.kube/config Oct 30 00:53:34.170: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:34.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-571" for this suite. 
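The shared-volume check above (one container writes /usr/share/volumeshare/shareddata.txt, the other reads it via exec) corresponds to a two-container pod like this (images and the writer's command are assumptions; the file path matches the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume
spec:
  containers:
  - name: busybox-main-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1   # assumed image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "echo Hello from the second container > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  volumes:
  - name: shared-data
    emptyDir: {}
# Read the file back the way the test's ExecWithOptions does:
#   kubectl exec pod-sharedvolume -c busybox-main-container -- cat /usr/share/volumeshare/shareddata.txt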
• [SLOW TEST:6.325 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":5,"skipped":116,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:17.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1514 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Oct 30 00:53:17.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2450 run e2e-test-httpd-pod --restart=Never --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1' Oct 30 00:53:17.525: INFO: stderr: "" Oct 30 00:53:17.525: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1518 Oct 30 00:53:17.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2450 delete pods e2e-test-httpd-pod' Oct 30 00:53:34.358: INFO: stderr: "" Oct 30 00:53:34.358: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:34.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2450" for this suite. 
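The kubectl run invocation recorded above is roughly equivalent to creating this pod object directly (only the fields kubectl run sets are sketched here):

apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-httpd-pod
  labels:
    run: e2e-test-httpd-pod   # kubectl run adds this label by default
spec:
  restartPolicy: Never
  containers:
  - name: e2e-test-httpd-pod
    image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1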
• [SLOW TEST:17.018 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1511 should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":2,"skipped":85,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:34.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:149 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 30 00:53:34.413: INFO: starting watch STEP: patching STEP: updating Oct 30 00:53:34.420: INFO: waiting for watch events with expected annotations Oct 30 00:53:34.420: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:34.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-502" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":3,"skipped":90,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:30.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 00:53:30.198: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5f98f2ff-5b04-4080-b344-304083c21b7e" in namespace "projected-9380" to be "Succeeded or Failed" Oct 30 00:53:30.200: INFO: Pod "downwardapi-volume-5f98f2ff-5b04-4080-b344-304083c21b7e": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.682683ms Oct 30 00:53:32.205: INFO: Pod "downwardapi-volume-5f98f2ff-5b04-4080-b344-304083c21b7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00695326s Oct 30 00:53:34.207: INFO: Pod "downwardapi-volume-5f98f2ff-5b04-4080-b344-304083c21b7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009721627s Oct 30 00:53:36.210: INFO: Pod "downwardapi-volume-5f98f2ff-5b04-4080-b344-304083c21b7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012505074s STEP: Saw pod success Oct 30 00:53:36.210: INFO: Pod "downwardapi-volume-5f98f2ff-5b04-4080-b344-304083c21b7e" satisfied condition "Succeeded or Failed" Oct 30 00:53:36.212: INFO: Trying to get logs from node node2 pod downwardapi-volume-5f98f2ff-5b04-4080-b344-304083c21b7e container client-container: STEP: delete the pod Oct 30 00:53:36.224: INFO: Waiting for pod downwardapi-volume-5f98f2ff-5b04-4080-b344-304083c21b7e to disappear Oct 30 00:53:36.226: INFO: Pod downwardapi-volume-5f98f2ff-5b04-4080-b344-304083c21b7e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:36.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9380" for this suite. • [SLOW TEST:6.168 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":124,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:32.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-7c9ba0eb-f31e-429f-ba4c-b0cd488913c9 STEP: Creating a pod to test consume configMaps Oct 30 00:53:32.053: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3cf3f15b-6de3-48a5-bf63-0ca29cf163fa" in namespace "projected-103" to be "Succeeded or Failed" Oct 30 00:53:32.055: INFO: Pod "pod-projected-configmaps-3cf3f15b-6de3-48a5-bf63-0ca29cf163fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.593167ms Oct 30 00:53:34.059: INFO: Pod "pod-projected-configmaps-3cf3f15b-6de3-48a5-bf63-0ca29cf163fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00652383s Oct 30 00:53:36.063: INFO: Pod "pod-projected-configmaps-3cf3f15b-6de3-48a5-bf63-0ca29cf163fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009817469s Oct 30 00:53:38.066: INFO: Pod "pod-projected-configmaps-3cf3f15b-6de3-48a5-bf63-0ca29cf163fa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.013528589s STEP: Saw pod success Oct 30 00:53:38.066: INFO: Pod "pod-projected-configmaps-3cf3f15b-6de3-48a5-bf63-0ca29cf163fa" satisfied condition "Succeeded or Failed" Oct 30 00:53:38.069: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-3cf3f15b-6de3-48a5-bf63-0ca29cf163fa container agnhost-container: STEP: delete the pod Oct 30 00:53:38.082: INFO: Waiting for pod pod-projected-configmaps-3cf3f15b-6de3-48a5-bf63-0ca29cf163fa to disappear Oct 30 00:53:38.084: INFO: Pod pod-projected-configmaps-3cf3f15b-6de3-48a5-bf63-0ca29cf163fa no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:38.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-103" for this suite. • [SLOW TEST:6.075 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:31.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Creating a NodePort Service STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota STEP: Ensuring resource quota status captures service creation STEP: Deleting Services STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:42.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4032" for this suite. • [SLOW TEST:11.101 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":-1,"completed":3,"skipped":34,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:42.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W1030 00:53:42.434264 22 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should support CronJob API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: creating STEP: getting STEP: listing STEP: watching Oct 30 00:53:42.443: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Oct 30 00:53:42.445: INFO: starting watch STEP: patching STEP: updating Oct 30 00:53:42.459: INFO: waiting for watch events with expected annotations Oct 30 00:53:42.459: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:42.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-2468" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":4,"skipped":94,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:28.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Oct 30 00:53:28.521: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:42.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4425" for this suite. 
• [SLOW TEST:14.420 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":44,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:34.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Oct 30 00:53:34.494: INFO: The status of Pod annotationupdate648ba689-3ff5-40f4-80b6-d095b9d18c63 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:53:36.498: INFO: The status of Pod annotationupdate648ba689-3ff5-40f4-80b6-d095b9d18c63 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:53:38.498: INFO: The status of Pod annotationupdate648ba689-3ff5-40f4-80b6-d095b9d18c63 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:53:40.499: INFO: The status of Pod annotationupdate648ba689-3ff5-40f4-80b6-d095b9d18c63 is Running (Ready = true) Oct 30 00:53:41.019: INFO: Successfully updated pod "annotationupdate648ba689-3ff5-40f4-80b6-d095b9d18c63" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:43.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-744" for this suite. 
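The annotation-update test above relies on a downward API volume, whose projected files the kubelet rewrites when pod metadata changes; a minimal sketch (the image, command, and annotation values are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    builder: bar
spec:
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1   # assumed image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
# Changing the annotation is the modification the test makes ("Successfully updated pod"):
#   kubectl annotate pod annotationupdate-demo builder=foo --overwrite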
• [SLOW TEST:8.581 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":98,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:43.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:43.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-9293" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":5,"skipped":129,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:31.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 30 00:53:43.973: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:43.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6427" for this suite. 
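The termination-message check above asserts that, with TerminationMessagePolicy FallbackToLogsOnError, a container that exits successfully and writes nothing to its message file reports an empty message (the 'Expected: &{} to match ... --' line). A sketch of such a container (the image and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1   # assumed image
    command: ["sh", "-c", "exit 0"]                     # succeeds, writes no message file
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError     # logs are used only on error
# Inspect what was recorded:
#   kubectl get pod termination-message-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'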
• [SLOW TEST:12.098 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":166,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:33.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 00:53:33.979: INFO: Waiting up to 5m0s for pod "downwardapi-volume-46afdffc-2d0c-451c-8a98-1595bb2b0bf1" in namespace "projected-1295" to be "Succeeded or Failed" Oct 30 00:53:33.985: INFO: Pod "downwardapi-volume-46afdffc-2d0c-451c-8a98-1595bb2b0bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.110062ms Oct 30 00:53:35.988: INFO: Pod "downwardapi-volume-46afdffc-2d0c-451c-8a98-1595bb2b0bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009038245s Oct 30 00:53:37.996: INFO: Pod "downwardapi-volume-46afdffc-2d0c-451c-8a98-1595bb2b0bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016340507s Oct 30 00:53:40.005: INFO: Pod "downwardapi-volume-46afdffc-2d0c-451c-8a98-1595bb2b0bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025453685s Oct 30 00:53:42.008: INFO: Pod "downwardapi-volume-46afdffc-2d0c-451c-8a98-1595bb2b0bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.028401782s Oct 30 00:53:44.013: INFO: Pod "downwardapi-volume-46afdffc-2d0c-451c-8a98-1595bb2b0bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.033569008s Oct 30 00:53:46.016: INFO: Pod "downwardapi-volume-46afdffc-2d0c-451c-8a98-1595bb2b0bf1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.036882611s STEP: Saw pod success Oct 30 00:53:46.016: INFO: Pod "downwardapi-volume-46afdffc-2d0c-451c-8a98-1595bb2b0bf1" satisfied condition "Succeeded or Failed" Oct 30 00:53:46.019: INFO: Trying to get logs from node node1 pod downwardapi-volume-46afdffc-2d0c-451c-8a98-1595bb2b0bf1 container client-container: STEP: delete the pod Oct 30 00:53:46.034: INFO: Waiting for pod downwardapi-volume-46afdffc-2d0c-451c-8a98-1595bb2b0bf1 to disappear Oct 30 00:53:46.035: INFO: Pod downwardapi-volume-46afdffc-2d0c-451c-8a98-1595bb2b0bf1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:46.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1295" for this suite. • [SLOW TEST:12.099 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":194,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:36.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-fc15aecf-3764-4992-b1c3-71161a499cc8 STEP: Creating a pod to test consume configMaps Oct 30 00:53:36.300: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f1170864-b0fa-4d02-95f9-1a3916f50a46" in namespace "projected-7222" to be "Succeeded or Failed" Oct 30 00:53:36.302: INFO: Pod "pod-projected-configmaps-f1170864-b0fa-4d02-95f9-1a3916f50a46": Phase="Pending", Reason="", readiness=false. Elapsed: 1.995164ms Oct 30 00:53:38.305: INFO: Pod "pod-projected-configmaps-f1170864-b0fa-4d02-95f9-1a3916f50a46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005230313s Oct 30 00:53:40.310: INFO: Pod "pod-projected-configmaps-f1170864-b0fa-4d02-95f9-1a3916f50a46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010137994s Oct 30 00:53:42.314: INFO: Pod "pod-projected-configmaps-f1170864-b0fa-4d02-95f9-1a3916f50a46": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013944647s Oct 30 00:53:44.318: INFO: Pod "pod-projected-configmaps-f1170864-b0fa-4d02-95f9-1a3916f50a46": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017865257s Oct 30 00:53:46.322: INFO: Pod "pod-projected-configmaps-f1170864-b0fa-4d02-95f9-1a3916f50a46": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.022197777s STEP: Saw pod success Oct 30 00:53:46.322: INFO: Pod "pod-projected-configmaps-f1170864-b0fa-4d02-95f9-1a3916f50a46" satisfied condition "Succeeded or Failed" Oct 30 00:53:46.326: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-f1170864-b0fa-4d02-95f9-1a3916f50a46 container agnhost-container: STEP: delete the pod Oct 30 00:53:46.342: INFO: Waiting for pod pod-projected-configmaps-f1170864-b0fa-4d02-95f9-1a3916f50a46 to disappear Oct 30 00:53:46.344: INFO: Pod pod-projected-configmaps-f1170864-b0fa-4d02-95f9-1a3916f50a46 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:46.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7222" for this suite. • [SLOW TEST:10.086 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":141,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:46.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:46.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4819" for this suite. 
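The secret lifecycle above (create, list, patch, delete via a LabelSelector) can be replayed with a manifest and two commands (the name, key, and label are illustrative; the data value is base64 of "value"):

apiVersion: v1
kind: Secret
metadata:
  name: test-secret
  labels:
    testsecret: "true"
data:
  key: dmFsdWU=
# Patch, then delete by the same label selector the test uses:
#   kubectl patch secret test-secret -p '{"data":{"key":"dmFsdWUx"}}'
#   kubectl delete secret -l testsecret=true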
• ------------------------------ {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":10,"skipped":148,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:42.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 00:53:42.604: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4922b3a9-0d67-4b6d-aba9-cadfc9848a1a" in namespace "downward-api-6032" to be "Succeeded or Failed" Oct 30 00:53:42.607: INFO: Pod "downwardapi-volume-4922b3a9-0d67-4b6d-aba9-cadfc9848a1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.43452ms Oct 30 00:53:44.610: INFO: Pod "downwardapi-volume-4922b3a9-0d67-4b6d-aba9-cadfc9848a1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005932499s Oct 30 00:53:46.614: INFO: Pod "downwardapi-volume-4922b3a9-0d67-4b6d-aba9-cadfc9848a1a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009906544s Oct 30 00:53:48.618: INFO: Pod "downwardapi-volume-4922b3a9-0d67-4b6d-aba9-cadfc9848a1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013773961s STEP: Saw pod success Oct 30 00:53:48.618: INFO: Pod "downwardapi-volume-4922b3a9-0d67-4b6d-aba9-cadfc9848a1a" satisfied condition "Succeeded or Failed" Oct 30 00:53:48.620: INFO: Trying to get logs from node node2 pod downwardapi-volume-4922b3a9-0d67-4b6d-aba9-cadfc9848a1a container client-container: STEP: delete the pod Oct 30 00:53:48.635: INFO: Waiting for pod downwardapi-volume-4922b3a9-0d67-4b6d-aba9-cadfc9848a1a to disappear Oct 30 00:53:48.636: INFO: Pod downwardapi-volume-4922b3a9-0d67-4b6d-aba9-cadfc9848a1a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:48.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6032" for this suite. 
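The DefaultMode assertion above concerns the permission bits applied to downward API files; a sketch with an explicit mode (the 0400 value, image, command, and paths are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1   # assumed image
    command: ["sh", "-c", "ls -l /etc/podinfo"]         # prints the applied mode
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400   # applied to every projected file unless an item overrides it
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name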
• [SLOW TEST:6.073 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":137,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:34.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 00:53:34.661: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 00:53:36.670: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152014, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152014, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152014, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152014, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:53:38.674: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152014, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152014, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152014, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152014, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:53:40.675: INFO: deployment 
status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152014, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152014, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152014, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152014, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:53:42.674: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152014, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152014, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152014, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152014, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:53:44.675: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152014, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152014, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152014, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152014, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 00:53:47.681: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:48.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9107" for this suite. STEP: Destroying namespace "webhook-9107-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.601 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":6,"skipped":135,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:42.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 00:53:42.955: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac7fb923-f7e4-4d4e-9506-c9d4dd46d618" in namespace "projected-3592" to be "Succeeded or Failed" Oct 30 00:53:42.959: INFO: Pod "downwardapi-volume-ac7fb923-f7e4-4d4e-9506-c9d4dd46d618": Phase="Pending", Reason="", readiness=false. Elapsed: 4.425835ms Oct 30 00:53:44.964: INFO: Pod "downwardapi-volume-ac7fb923-f7e4-4d4e-9506-c9d4dd46d618": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009162029s Oct 30 00:53:46.968: INFO: Pod "downwardapi-volume-ac7fb923-f7e4-4d4e-9506-c9d4dd46d618": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013497925s Oct 30 00:53:48.973: INFO: Pod "downwardapi-volume-ac7fb923-f7e4-4d4e-9506-c9d4dd46d618": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.018214369s STEP: Saw pod success Oct 30 00:53:48.973: INFO: Pod "downwardapi-volume-ac7fb923-f7e4-4d4e-9506-c9d4dd46d618" satisfied condition "Succeeded or Failed" Oct 30 00:53:48.976: INFO: Trying to get logs from node node2 pod downwardapi-volume-ac7fb923-f7e4-4d4e-9506-c9d4dd46d618 container client-container: STEP: delete the pod Oct 30 00:53:48.988: INFO: Waiting for pod downwardapi-volume-ac7fb923-f7e4-4d4e-9506-c9d4dd46d618 to disappear Oct 30 00:53:48.990: INFO: Pod downwardapi-volume-ac7fb923-f7e4-4d4e-9506-c9d4dd46d618 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:48.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3592" for this suite. • [SLOW TEST:6.075 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":46,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":66,"failed":0} [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:38.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:49.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3748" for this suite. 
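The ReplicationController walked through its lifecycle above (create, patch metadata and status, scale, delete by collection) has roughly this shape (the image, labels, and replica counts are illustrative):

apiVersion: v1
kind: ReplicationController
metadata:
  name: test-rc
spec:
  replicas: 1
  selector:
    app: test-rc            # RCs use a plain equality selector, not matchLabels
  template:
    metadata:
      labels:
        app: test-rc
    spec:
      containers:
      - name: httpd
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1   # assumed image
# Approximate kubectl equivalents of the patch/scale/delete steps:
#   kubectl patch rc test-rc -p '{"metadata":{"labels":{"test-rc":"patched"}}}'
#   kubectl scale rc test-rc --replicas=2
#   kubectl delete rc -l app=test-rc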
• [SLOW TEST:11.267 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":8,"skipped":66,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:46.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-d24de162-75cb-4d5b-baf9-26673e9d7e6b STEP: Creating a pod to test consume configMaps Oct 30 00:53:46.083: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8f3093bb-edf3-4876-95ce-36052908ed34" in namespace "projected-1172" to be "Succeeded or Failed" Oct 30 00:53:46.086: INFO: Pod "pod-projected-configmaps-8f3093bb-edf3-4876-95ce-36052908ed34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382035ms Oct 30 00:53:48.089: INFO: Pod "pod-projected-configmaps-8f3093bb-edf3-4876-95ce-36052908ed34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005251381s Oct 30 00:53:50.093: INFO: Pod "pod-projected-configmaps-8f3093bb-edf3-4876-95ce-36052908ed34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009395638s Oct 30 00:53:52.096: INFO: Pod "pod-projected-configmaps-8f3093bb-edf3-4876-95ce-36052908ed34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012464309s STEP: Saw pod success Oct 30 00:53:52.096: INFO: Pod "pod-projected-configmaps-8f3093bb-edf3-4876-95ce-36052908ed34" satisfied condition "Succeeded or Failed" Oct 30 00:53:52.098: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-8f3093bb-edf3-4876-95ce-36052908ed34 container agnhost-container: STEP: delete the pod Oct 30 00:53:52.111: INFO: Waiting for pod pod-projected-configmaps-8f3093bb-edf3-4876-95ce-36052908ed34 to disappear Oct 30 00:53:52.112: INFO: Pod pod-projected-configmaps-8f3093bb-edf3-4876-95ce-36052908ed34 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:53:52.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1172" for this suite. 
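The "with mappings" variant above projects a configMap key to a custom path inside the volume; a minimal sketch (names, key, path, and the image are assumptions, and the real objects carry generated suffixes):

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test-volume-map
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1   # assumed image
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-1
            path: path/to/data-1   # the key is remapped to this relative path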
• [SLOW TEST:6.074 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":195,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:53:48.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting a starting resourceVersion
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:53:54.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1596" for this suite.
• [SLOW TEST:5.606 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":6,"skipped":199,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:53:49.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Oct 30 00:53:49.074: INFO: Waiting up to 5m0s for pod "security-context-fb9147bb-ce8c-48a4-ac71-54e016b4b7f0" in namespace "security-context-3774" to be "Succeeded or Failed"
Oct 30 00:53:49.076: INFO: Pod "security-context-fb9147bb-ce8c-48a4-ac71-54e016b4b7f0": Phase="Pending", Reason="", readiness=false. Elapsed: 1.899978ms
Oct 30 00:53:51.079: INFO: Pod "security-context-fb9147bb-ce8c-48a4-ac71-54e016b4b7f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00505863s
Oct 30 00:53:53.082: INFO: Pod "security-context-fb9147bb-ce8c-48a4-ac71-54e016b4b7f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00835051s
Oct 30 00:53:55.086: INFO: Pod "security-context-fb9147bb-ce8c-48a4-ac71-54e016b4b7f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011929765s
STEP: Saw pod success
Oct 30 00:53:55.086: INFO: Pod "security-context-fb9147bb-ce8c-48a4-ac71-54e016b4b7f0" satisfied condition "Succeeded or Failed"
Oct 30 00:53:55.088: INFO: Trying to get logs from node node1 pod security-context-fb9147bb-ce8c-48a4-ac71-54e016b4b7f0 container test-container:
STEP: delete the pod
Oct 30 00:53:55.098: INFO: Waiting for pod security-context-fb9147bb-ce8c-48a4-ac71-54e016b4b7f0 to disappear
Oct 30 00:53:55.100: INFO: Pod security-context-fb9147bb-ce8c-48a4-ac71-54e016b4b7f0 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:53:55.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-3774" for this suite.
• [SLOW TEST:6.064 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":70,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:53:52.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-940.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-940.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-940.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-940.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-940.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-940.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 30 00:53:58.215: INFO: DNS probes using dns-940/dns-test-23c0e75f-6df4-477e-93a3-3eaf66c547c8 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:53:58.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-940" for this suite.
• [SLOW TEST:6.072 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":11,"skipped":214,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:53:54.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-e8a99960-61ed-4bc8-8ead-7c783dc370cf
STEP: Creating a pod to test consume configMaps
Oct 30 00:53:54.412: INFO: Waiting up to 5m0s for pod "pod-configmaps-413a435b-2c5a-4a08-a336-7bb38994ad1c" in namespace "configmap-5117" to be "Succeeded or Failed"
Oct 30 00:53:54.413: INFO: Pod "pod-configmaps-413a435b-2c5a-4a08-a336-7bb38994ad1c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.827168ms
Oct 30 00:53:56.417: INFO: Pod "pod-configmaps-413a435b-2c5a-4a08-a336-7bb38994ad1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005287665s
Oct 30 00:53:58.420: INFO: Pod "pod-configmaps-413a435b-2c5a-4a08-a336-7bb38994ad1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008679605s
STEP: Saw pod success
Oct 30 00:53:58.420: INFO: Pod "pod-configmaps-413a435b-2c5a-4a08-a336-7bb38994ad1c" satisfied condition "Succeeded or Failed"
Oct 30 00:53:58.423: INFO: Trying to get logs from node node1 pod pod-configmaps-413a435b-2c5a-4a08-a336-7bb38994ad1c container agnhost-container:
STEP: delete the pod
Oct 30 00:53:58.434: INFO: Waiting for pod pod-configmaps-413a435b-2c5a-4a08-a336-7bb38994ad1c to disappear
Oct 30 00:53:58.436: INFO: Pod pod-configmaps-413a435b-2c5a-4a08-a336-7bb38994ad1c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:53:58.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5117" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":215,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:53:43.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86
[It] should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a Deployment
STEP: waiting for Deployment to be created
STEP: waiting for all Replicas to be Ready
Oct 30 00:53:43.180: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Oct 30 00:53:43.180: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Oct 30 00:53:43.184: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Oct 30 00:53:43.184: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Oct 30 00:53:43.191: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Oct 30 00:53:43.191: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Oct 30 00:53:43.223: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Oct 30 00:53:43.223: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Oct 30 00:53:50.432: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 1 and labels map[test-deployment-static:true]
Oct 30 00:53:50.432: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 1 and labels map[test-deployment-static:true]
Oct 30 00:53:50.630: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 2 and labels map[test-deployment-static:true]
STEP: patching the Deployment
Oct 30 00:53:50.635: INFO: observed event type ADDED
STEP: waiting for Replicas to scale
Oct 30 00:53:50.637: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 0
Oct 30 00:53:50.637: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 0
Oct 30 00:53:50.637: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 0
Oct 30 00:53:50.637: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 0
Oct 30 00:53:50.637: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 0
Oct 30 00:53:50.637: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 0
Oct 30 00:53:50.637: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 0
Oct 30 00:53:50.637: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 0
Oct 30 00:53:50.637: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 1
Oct 30 00:53:50.637: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 1
Oct 30 00:53:50.637: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 2
Oct 30 00:53:50.637: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 2
Oct 30 00:53:50.637: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 2
Oct 30 00:53:50.637: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 2
Oct 30 00:53:50.641: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 2
Oct 30 00:53:50.641: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 2
Oct 30 00:53:50.649: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 2
Oct 30 00:53:50.649: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 2
Oct 30 00:53:50.656: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 1
Oct 30 00:53:50.656: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 1
Oct 30 00:53:50.663: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 1
Oct 30 00:53:50.663: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 1
Oct 30 00:53:54.577: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 2
Oct 30 00:53:54.577: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 2
Oct 30 00:53:54.589: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 1
STEP: listing Deployments
Oct 30 00:53:54.594: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true]
STEP: updating the Deployment
Oct 30 00:53:54.605: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 1
STEP: fetching the DeploymentStatus
Oct 30 00:53:54.612: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Oct 30 00:53:54.612: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Oct 30 00:53:54.618: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Oct 30 00:53:54.628: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Oct 30 00:53:54.633: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Oct 30 00:53:54.636: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Oct 30 00:53:57.901: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Oct 30 00:53:57.913: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Oct 30 00:53:57.917: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Oct 30 00:53:57.926: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Oct 30 00:54:02.000: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true]
STEP: patching the DeploymentStatus
STEP: fetching the DeploymentStatus
Oct 30 00:54:02.022: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 1
Oct 30 00:54:02.022: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 1
Oct 30 00:54:02.022: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 1
Oct 30 00:54:02.022: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 1
Oct 30 00:54:02.022: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 1
Oct 30 00:54:02.022: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 1
Oct 30 00:54:02.022: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 2
Oct 30 00:54:02.022: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 2
Oct 30 00:54:02.022: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 2
Oct 30 00:54:02.022: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 2
Oct 30 00:54:02.022: INFO: observed Deployment test-deployment in namespace deployment-8434 with ReadyReplicas 3
STEP: deleting the Deployment
Oct 30 00:54:02.028: INFO: observed event type MODIFIED
Oct 30 00:54:02.028: INFO: observed event type MODIFIED
Oct 30 00:54:02.028: INFO: observed event type MODIFIED
Oct 30 00:54:02.028: INFO: observed event type MODIFIED
Oct 30 00:54:02.029: INFO: observed event type MODIFIED
Oct 30 00:54:02.029: INFO: observed event type MODIFIED
Oct 30 00:54:02.029: INFO: observed event type MODIFIED
Oct 30 00:54:02.029: INFO:
observed event type MODIFIED Oct 30 00:54:02.029: INFO: observed event type MODIFIED Oct 30 00:54:02.029: INFO: observed event type MODIFIED Oct 30 00:54:02.029: INFO: observed event type MODIFIED Oct 30 00:54:02.029: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 30 00:54:02.034: INFO: Log out all the ReplicaSets if there is no deployment created Oct 30 00:54:02.037: INFO: ReplicaSet "test-deployment-748588b7cd": &ReplicaSet{ObjectMeta:{test-deployment-748588b7cd deployment-8434 ffc1fa69-c700-4d82-9292-a15ae1a39fa1 61980 4 2021-10-30 00:53:50 +0000 UTC map[pod-template-hash:748588b7cd test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment a47bed53-a5c8-47dc-98ae-504ab04ddff1 0xc004b2b2c7 0xc004b2b2c8}] [] [{kube-controller-manager Update apps/v1 2021-10-30 00:54:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a47bed53-a5c8-47dc-98ae-504ab04ddff1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 748588b7cd,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:748588b7cd test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.4.1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004b2b330 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 30 00:54:02.040: INFO: ReplicaSet "test-deployment-7b4c744884": &ReplicaSet{ObjectMeta:{test-deployment-7b4c744884 deployment-8434 94f0cfa2-c441-4c79-b5fb-c67ddb3adb05 61712 3 2021-10-30 00:53:43 +0000 UTC map[pod-template-hash:7b4c744884 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment a47bed53-a5c8-47dc-98ae-504ab04ddff1 0xc004b2b397 0xc004b2b398}] [] [{kube-controller-manager Update apps/v1 2021-10-30 
00:53:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a47bed53-a5c8-47dc-98ae-504ab04ddff1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7b4c744884,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7b4c744884 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004b2b400 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 30 00:54:02.042: INFO: ReplicaSet "test-deployment-85d87c6f4b": &ReplicaSet{ObjectMeta:{test-deployment-85d87c6f4b deployment-8434 e549f645-f1fa-4a4d-ab08-c869f60a121a 61972 2 2021-10-30 00:53:54 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment a47bed53-a5c8-47dc-98ae-504ab04ddff1 0xc004b2b467 0xc004b2b468}] [] [{kube-controller-manager Update apps/v1 2021-10-30 00:53:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a47bed53-a5c8-47dc-98ae-504ab04ddff1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 85d87c6f4b,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004b2b4d0 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} Oct 30 00:54:02.045: INFO: pod: "test-deployment-85d87c6f4b-fttbq": &Pod{ObjectMeta:{test-deployment-85d87c6f4b-fttbq test-deployment-85d87c6f4b- deployment-8434 ef222acf-bd76-4b65-83b6-3139a8502b2f 61802 0 2021-10-30 00:53:54 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.189" ], "mac": "42:bb:05:8f:5a:ea", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.189" ], "mac": "42:bb:05:8f:5a:ea", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-deployment-85d87c6f4b e549f645-f1fa-4a4d-ab08-c869f60a121a 0xc004b2bb57 0xc004b2bb58}] [] [{kube-controller-manager Update v1 2021-10-30 00:53:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e549f645-f1fa-4a4d-ab08-c869f60a121a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus 
Update v1 2021-10-30 00:53:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 00:53:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.189\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zrzl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zrzl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Ope
rator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 00:53:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 00:53:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 00:53:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 00:53:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.189,StartTime:2021-10-30 00:53:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 00:53:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://c6ed4103e40c1b245c4e695d1066f881f7804bb156cafb2d22886ea542ed6a7e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.189,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 00:54:02.045: INFO: pod: "test-deployment-85d87c6f4b-twcmj": &Pod{ObjectMeta:{test-deployment-85d87c6f4b-twcmj test-deployment-85d87c6f4b- deployment-8434 34c0bcb0-3dbb-492a-9111-790cd08c819d 61971 0 2021-10-30 00:53:57 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.88" ], "mac": "b6:4e:28:33:ad:28", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.88" ], "mac": "b6:4e:28:33:ad:28", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-deployment-85d87c6f4b e549f645-f1fa-4a4d-ab08-c869f60a121a 0xc004b2bd4f 0xc004b2bd60}] [] [{kube-controller-manager Update v1 2021-10-30 00:53:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e549f645-f1fa-4a4d-ab08-c869f60a121a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 00:53:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 00:54:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.88\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5ndl6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5ndl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSec
onds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 00:53:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 00:54:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 00:54:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 00:53:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.88,StartTime:2021-10-30 00:53:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 00:54:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://abcd5d67ba903b0f120e3159906b0d638a04f7e0e3ca181f27c1e07904fb675f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.88,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:54:02.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8434" for this suite. 
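------------------------------
The Deployment lifecycle above follows the same create/patch/update/patch-status/delete arc as the ReplicationController test, driven through the apps/v1 API. A minimal sketch of the create-then-rolling-update portion, with an illustrative name ("deploy-demo") and the same images this run pulls:

// Sketch only: create a Deployment, then patch its pod template to trigger a rollout.
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	deps := kubernetes.NewForConfigOrDie(cfg).AppsV1().Deployments("default")
	ctx := context.TODO()

	replicas := int32(2)
	labels := map[string]string{"app": "deploy-demo"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "deploy-demo", Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "web",
					Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
				}}},
			},
		},
	}
	if _, err := deps.Create(ctx, d, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Patching the pod template image starts a rolling update, the analogue of
	// the "updating the Deployment" step in the log above.
	patch := []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"web","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32"}]}}}}`)
	if _, err := deps.Patch(ctx, "deploy-demo", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}

Each pod-template change spawns a new ReplicaSet — the dump above shows three, at revisions 1, 2 and 3 — and the "waiting for Replicas to scale" churn is the rolling update shifting pods between them.
------------------------------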
• [SLOW TEST:18.901 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":6,"skipped":142,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:53:49.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 30 00:53:49.834: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 30 00:53:51.843: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152029, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152029, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152029, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152029, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 30 00:53:54.854: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 30 00:53:54.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6835-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:54:02.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9264" for this suite.
STEP: Destroying namespace "webhook-9264-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:13.555 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":9,"skipped":88,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:53:44.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 30 00:53:44.045: INFO: The status of Pod test-webserver-568409c7-af61-40b6-b765-2f94acebcd08 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 00:53:46.047: INFO: The status of Pod test-webserver-568409c7-af61-40b6-b765-2f94acebcd08 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 00:53:48.047: INFO: The status of Pod test-webserver-568409c7-af61-40b6-b765-2f94acebcd08 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 00:53:50.049: INFO: The status of Pod test-webserver-568409c7-af61-40b6-b765-2f94acebcd08 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 00:53:52.047: INFO: The status of Pod test-webserver-568409c7-af61-40b6-b765-2f94acebcd08 is Running (Ready = false)
Oct 30 00:53:54.047: INFO: The status of Pod test-webserver-568409c7-af61-40b6-b765-2f94acebcd08 is Running (Ready = false)
Oct 30 00:53:56.048: INFO: The status of Pod test-webserver-568409c7-af61-40b6-b765-2f94acebcd08 is Running (Ready = false)
Oct 30 00:53:58.048: INFO: The status of Pod test-webserver-568409c7-af61-40b6-b765-2f94acebcd08 is Running (Ready = false)
Oct 30 00:54:00.049: INFO: The status of Pod test-webserver-568409c7-af61-40b6-b765-2f94acebcd08 is Running (Ready = false)
Oct 30 00:54:02.048: INFO: The status of Pod test-webserver-568409c7-af61-40b6-b765-2f94acebcd08 is Running (Ready = false)
Oct 30 00:54:04.047: INFO: The status of Pod test-webserver-568409c7-af61-40b6-b765-2f94acebcd08 is Running (Ready = false)
Oct 30 00:54:06.048: INFO: The status of Pod test-webserver-568409c7-af61-40b6-b765-2f94acebcd08 is Running (Ready = true)
Oct 30 00:54:06.050: INFO: Container started at 2021-10-30 00:53:49 +0000 UTC, pod became ready at 2021-10-30 00:54:04 +0000 UTC
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:54:06.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7440" for this suite.
• [SLOW TEST:22.048 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":175,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:53:58.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Oct 30 00:54:07.544: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:54:07.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2783" for this suite.
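------------------------------
The termination-message test above hinges on the container's TerminationMessagePolicy: with FallbackToLogsOnError, the kubelet copies the tail of the container log into status.containerStatuses[].state.terminated.message when the container fails without writing to its termination-message path — hence the "Expected: &{DONE} to match ... DONE" check. A minimal sketch of a pod exercising the same behavior (the pod name and busybox image are illustrative):

// Sketch only: a failing container whose log tail becomes its termination message.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "fail-loudly",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo DONE; exit 1"},
				// Non-zero exit + nothing written to the termination message
				// path => the kubelet falls back to the log tail ("DONE").
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Once the container exits, the message is readable straight from the pod status, with no separate log fetch.
------------------------------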
• [SLOW TEST:9.082 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":233,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:54:02.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Oct 30 00:54:02.104: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:54:12.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-34" for this suite.
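------------------------------
The InitContainer test builds a pod whose spec.initContainers must all run to completion, in order, before the app container starts; with restartPolicy Always the pod then keeps running. A minimal sketch of such a pod (hypothetical names, busybox/pause images as stand-ins for the suite's):

// Sketch only: ordered init containers ahead of a long-running app container.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Each init container must exit 0 before the next starts, and the
			// app container only starts after both have completed.
			InitContainers: []corev1.Container{
				{Name: "init-1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init-2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------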
• [SLOW TEST:10.214 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":7,"skipped":155,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:53:06.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1030 00:53:13.007619 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 00:54:15.024: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:54:15.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-572" for this suite.
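------------------------------
The "if the deleteOptions says so" in the garbage-collector test refers to the deletion propagation policy: with Foreground propagation the RC gets a deletion timestamp and a foregroundDeletion finalizer, and is only removed once the garbage collector has deleted all of its pods — which is exactly why the RC "stays around" while its dependents drain. A minimal sketch of such a delete (the RC name is hypothetical):

// Sketch only: foreground deletion keeps the owner until its dependents are gone.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	fg := metav1.DeletePropagationForeground
	err = cs.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "rc-demo", metav1.DeleteOptions{PropagationPolicy: &fg})
	if err != nil {
		panic(err)
	}
}

With Orphan propagation the pods would instead outlive the RC; Background (commonly the default) removes the owner immediately and collects the dependents afterwards.
------------------------------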
• [SLOW TEST:68.079 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":4,"skipped":39,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:54:15.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating secret secrets-9445/secret-test-6be98e6a-6c21-4a88-8418-05f314e16331
STEP: Creating a pod to test consume secrets
Oct 30 00:54:15.141: INFO: Waiting up to 5m0s for pod "pod-configmaps-e9afe93a-baa3-4d19-abb4-d184c386b2d2" in namespace "secrets-9445" to be "Succeeded or Failed"
Oct 30 00:54:15.144: INFO: Pod "pod-configmaps-e9afe93a-baa3-4d19-abb4-d184c386b2d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.79452ms
Oct 30 00:54:17.149: INFO: Pod "pod-configmaps-e9afe93a-baa3-4d19-abb4-d184c386b2d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00746327s
Oct 30 00:54:19.152: INFO: Pod "pod-configmaps-e9afe93a-baa3-4d19-abb4-d184c386b2d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01053177s
STEP: Saw pod success
Oct 30 00:54:19.152: INFO: Pod "pod-configmaps-e9afe93a-baa3-4d19-abb4-d184c386b2d2" satisfied condition "Succeeded or Failed"
Oct 30 00:54:19.155: INFO: Trying to get logs from node node2 pod pod-configmaps-e9afe93a-baa3-4d19-abb4-d184c386b2d2 container env-test:
STEP: delete the pod
Oct 30 00:54:19.168: INFO: Waiting for pod pod-configmaps-e9afe93a-baa3-4d19-abb4-d184c386b2d2 to disappear
Oct 30 00:54:19.170: INFO: Pod pod-configmaps-e9afe93a-baa3-4d19-abb4-d184c386b2d2 no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:54:19.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9445" for this suite.
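------------------------------
The Secrets test above injects a secret key into the container environment with env[].valueFrom.secretKeyRef and then asserts the variable's value in the pod output. A minimal sketch with hypothetical secret and pod names:

// Sketch only: a secret key surfaced as an environment variable.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "env-secret"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().Secrets("default").Create(ctx, sec, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo $SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "env-secret"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------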
• ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":81,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:54:07.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Oct 30 00:54:07.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1091 create -f -' Oct 30 00:54:07.977: INFO: stderr: "" Oct 30 00:54:07.977: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Oct 30 00:54:07.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1091 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 30 00:54:08.139: INFO: stderr: "" Oct 30 00:54:08.139: INFO: stdout: "update-demo-nautilus-8clrn update-demo-nautilus-zwnfx " Oct 30 00:54:08.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1091 get pods update-demo-nautilus-8clrn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 30 00:54:08.303: INFO: stderr: "" Oct 30 00:54:08.303: INFO: stdout: "" Oct 30 00:54:08.303: INFO: update-demo-nautilus-8clrn is created but not running Oct 30 00:54:13.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1091 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 30 00:54:13.478: INFO: stderr: "" Oct 30 00:54:13.478: INFO: stdout: "update-demo-nautilus-8clrn update-demo-nautilus-zwnfx " Oct 30 00:54:13.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1091 get pods update-demo-nautilus-8clrn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Oct 30 00:54:13.626: INFO: stderr: "" Oct 30 00:54:13.626: INFO: stdout: "" Oct 30 00:54:13.626: INFO: update-demo-nautilus-8clrn is created but not running Oct 30 00:54:18.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1091 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 30 00:54:18.795: INFO: stderr: "" Oct 30 00:54:18.795: INFO: stdout: "update-demo-nautilus-8clrn update-demo-nautilus-zwnfx " Oct 30 00:54:18.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1091 get pods update-demo-nautilus-8clrn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 30 00:54:18.959: INFO: stderr: "" Oct 30 00:54:18.959: INFO: stdout: "true" Oct 30 00:54:18.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1091 get pods update-demo-nautilus-8clrn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 30 00:54:19.112: INFO: stderr: "" Oct 30 00:54:19.112: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 30 00:54:19.112: INFO: validating pod update-demo-nautilus-8clrn Oct 30 00:54:19.115: INFO: got data: { "image": "nautilus.jpg" } Oct 30 00:54:19.115: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 30 00:54:19.115: INFO: update-demo-nautilus-8clrn is verified up and running Oct 30 00:54:19.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1091 get pods update-demo-nautilus-zwnfx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 30 00:54:19.262: INFO: stderr: "" Oct 30 00:54:19.262: INFO: stdout: "true" Oct 30 00:54:19.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1091 get pods update-demo-nautilus-zwnfx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 30 00:54:19.417: INFO: stderr: "" Oct 30 00:54:19.417: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 30 00:54:19.417: INFO: validating pod update-demo-nautilus-zwnfx Oct 30 00:54:19.421: INFO: got data: { "image": "nautilus.jpg" } Oct 30 00:54:19.421: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 30 00:54:19.421: INFO: update-demo-nautilus-zwnfx is verified up and running STEP: using delete to clean up resources Oct 30 00:54:19.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1091 delete --grace-period=0 --force -f -' Oct 30 00:54:19.557: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Oct 30 00:54:19.557: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Oct 30 00:54:19.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1091 get rc,svc -l name=update-demo --no-headers' Oct 30 00:54:19.744: INFO: stderr: "No resources found in kubectl-1091 namespace.\n" Oct 30 00:54:19.744: INFO: stdout: "" Oct 30 00:54:19.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1091 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 30 00:54:19.900: INFO: stderr: "" Oct 30 00:54:19.900: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:54:19.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1091" for this suite. • [SLOW TEST:12.329 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":9,"skipped":241,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:54:12.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 00:54:12.449: INFO: Waiting up to 5m0s for pod "downwardapi-volume-735eb18a-7620-424f-83bd-e92d6ac76799" in namespace "projected-8433" to be "Succeeded or Failed" Oct 30 00:54:12.451: INFO: Pod "downwardapi-volume-735eb18a-7620-424f-83bd-e92d6ac76799": Phase="Pending", Reason="", readiness=false. Elapsed: 1.942111ms Oct 30 00:54:14.456: INFO: Pod "downwardapi-volume-735eb18a-7620-424f-83bd-e92d6ac76799": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006598387s Oct 30 00:54:16.461: INFO: Pod "downwardapi-volume-735eb18a-7620-424f-83bd-e92d6ac76799": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011349734s Oct 30 00:54:18.464: INFO: Pod "downwardapi-volume-735eb18a-7620-424f-83bd-e92d6ac76799": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015084186s Oct 30 00:54:20.468: INFO: Pod "downwardapi-volume-735eb18a-7620-424f-83bd-e92d6ac76799": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.019218966s STEP: Saw pod success Oct 30 00:54:20.469: INFO: Pod "downwardapi-volume-735eb18a-7620-424f-83bd-e92d6ac76799" satisfied condition "Succeeded or Failed" Oct 30 00:54:20.471: INFO: Trying to get logs from node node1 pod downwardapi-volume-735eb18a-7620-424f-83bd-e92d6ac76799 container client-container: STEP: delete the pod Oct 30 00:54:20.511: INFO: Waiting for pod downwardapi-volume-735eb18a-7620-424f-83bd-e92d6ac76799 to disappear Oct 30 00:54:20.513: INFO: Pod downwardapi-volume-735eb18a-7620-424f-83bd-e92d6ac76799 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:54:20.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8433" for this suite. • [SLOW TEST:8.105 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":225,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:54:19.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption is created Oct 30 00:54:20.006: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:54:22.009: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:54:24.010: INFO: The status of Pod pod-adoption is Running (Ready = true) STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:54:25.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5047" for this suite. 
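Note: the ReplicationController adoption test above depends on controller-manager behavior rather than a single API call: a bare pod already carries the label, a new RC's selector matches it, and the RC controller adopts the orphan by setting an ownerReference on it instead of creating a second replica. A sketch of the two objects, with hypothetical names:

func adoptionDemo(cs *kubernetes.Clientset) error {
	labels := map[string]string{"name": "pod-adoption"}

	// 1) A bare pod with a matching label and no owner.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec:       v1.PodSpec{Containers: []v1.Container{{Name: "app", Image: "nginx"}}},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		return err
	}

	// 2) An RC whose selector matches; the RC controller adopts the orphan
	//    rather than creating a fresh replica.
	one := int32(1)
	rc := &v1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: v1.ReplicationControllerSpec{
			Replicas: &one,
			Selector: labels,
			Template: &v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       v1.PodSpec{Containers: []v1.Container{{Name: "app", Image: "nginx"}}},
			},
		},
	}
	_, err := cs.CoreV1().ReplicationControllers("default").Create(context.TODO(), rc, metav1.CreateOptions{})
	return err
}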
• [SLOW TEST:5.064 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":10,"skipped":276,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:54:19.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 00:54:19.231: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ce8fcb2-3a26-4363-9e65-98af169dbad6" in namespace "projected-2474" to be "Succeeded or Failed" Oct 30 00:54:19.233: INFO: Pod "downwardapi-volume-3ce8fcb2-3a26-4363-9e65-98af169dbad6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015656ms Oct 30 00:54:21.236: INFO: Pod "downwardapi-volume-3ce8fcb2-3a26-4363-9e65-98af169dbad6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00568829s Oct 30 00:54:23.242: INFO: Pod "downwardapi-volume-3ce8fcb2-3a26-4363-9e65-98af169dbad6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010965066s Oct 30 00:54:25.246: INFO: Pod "downwardapi-volume-3ce8fcb2-3a26-4363-9e65-98af169dbad6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015587075s STEP: Saw pod success Oct 30 00:54:25.246: INFO: Pod "downwardapi-volume-3ce8fcb2-3a26-4363-9e65-98af169dbad6" satisfied condition "Succeeded or Failed" Oct 30 00:54:25.249: INFO: Trying to get logs from node node1 pod downwardapi-volume-3ce8fcb2-3a26-4363-9e65-98af169dbad6 container client-container: STEP: delete the pod Oct 30 00:54:25.270: INFO: Waiting for pod downwardapi-volume-3ce8fcb2-3a26-4363-9e65-98af169dbad6 to disappear Oct 30 00:54:25.274: INFO: Pod downwardapi-volume-3ce8fcb2-3a26-4363-9e65-98af169dbad6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:54:25.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2474" for this suite. 
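Note: the projected downwardAPI tests above (the cpu-limit one and this memory-default one) read container resource fields out of a projected volume; when the container sets no limit, the published value falls back to the node's allocatable capacity, which is what the memory-default variant asserts. A sketch of the volume shape, with hypothetical pod and file names:

func projectedResourcePod() *v1.Pod {
	items := []v1.DownwardAPIVolumeFile{
		{Path: "cpu_limit", ResourceFieldRef: &v1.ResourceFieldSelector{
			ContainerName: "client-container", Resource: "limits.cpu"}},
		// With no memory limit on the container, this file reports the
		// node's allocatable memory instead.
		{Path: "memory_limit", ResourceFieldRef: &v1.ResourceFieldSelector{
			ContainerName: "client-container", Resource: "limits.memory"}},
	}
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/cpu_limit /etc/podinfo/memory_limit"},
				VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{
					Projected: &v1.ProjectedVolumeSource{
						Sources: []v1.VolumeProjection{{
							DownwardAPI: &v1.DownwardAPIProjection{Items: items},
						}},
					},
				},
			}},
		},
	}
}

The plain [sig-storage] Downward API volume tests later in this section exercise the same Items under a non-projected DownwardAPI volume source.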
• [SLOW TEST:6.090 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":89,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:54:25.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:54:25.074: INFO: The status of Pod server-envvars-d6b955d8-60ff-4763-8c34-c2e0c4d7f0bc is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:54:27.077: INFO: The status of Pod server-envvars-d6b955d8-60ff-4763-8c34-c2e0c4d7f0bc is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:54:29.079: INFO: The status of Pod server-envvars-d6b955d8-60ff-4763-8c34-c2e0c4d7f0bc is Running (Ready = true) Oct 30 00:54:29.101: INFO: Waiting up to 5m0s for pod "client-envvars-96aee2d1-81c1-4752-a383-f52a35a6cf99" in namespace "pods-5239" to be "Succeeded or Failed" Oct 30 00:54:29.103: INFO: Pod "client-envvars-96aee2d1-81c1-4752-a383-f52a35a6cf99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17082ms Oct 30 00:54:31.109: INFO: Pod "client-envvars-96aee2d1-81c1-4752-a383-f52a35a6cf99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00800825s Oct 30 00:54:33.112: INFO: Pod "client-envvars-96aee2d1-81c1-4752-a383-f52a35a6cf99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011664895s STEP: Saw pod success Oct 30 00:54:33.113: INFO: Pod "client-envvars-96aee2d1-81c1-4752-a383-f52a35a6cf99" satisfied condition "Succeeded or Failed" Oct 30 00:54:33.115: INFO: Trying to get logs from node node2 pod client-envvars-96aee2d1-81c1-4752-a383-f52a35a6cf99 container env3cont: STEP: delete the pod Oct 30 00:54:33.129: INFO: Waiting for pod client-envvars-96aee2d1-81c1-4752-a383-f52a35a6cf99 to disappear Oct 30 00:54:33.132: INFO: Pod client-envvars-96aee2d1-81c1-4752-a383-f52a35a6cf99 no longer exists [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:54:33.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5239" for this suite. 
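Note: the Pods test above checks the legacy service environment variables: for every service that already exists in the namespace when a pod starts, the kubelet injects variables derived from the service name, so a hypothetical service "fooservice" on port 8765 would yield FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT (name uppercased, dashes becoming underscores). The test's client pod amounts to a probe like this:

func envProbePod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-envvars-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "env3cont",
				Image: "busybox",
				// Prints the injected *_SERVICE_HOST / *_SERVICE_PORT
				// variables for services created before this pod.
				Command: []string{"sh", "-c", "env | grep _SERVICE_"},
			}},
		},
	}
}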
• [SLOW TEST:8.098 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":278,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:54:02.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: referencing a single matching pod STEP: referencing matching pods with named port STEP: creating empty Endpoints and EndpointSlices for no matching Pods STEP: recreating EndpointSlices after they've been deleted Oct 30 00:54:28.084: INFO: EndpointSlice for Service endpointslice-9978/example-named-port not found [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:54:38.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-9978" for this suite. 
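Note: the EndpointSlice test above checks that the endpointslice controller mirrors each Service into discovery.k8s.io/v1 objects and recreates slices that are deleted out from under it, hence the "not found" line followed by a wait. Slices belonging to a service carry the kubernetes.io/service-name label, so they can be listed like this (extra import: "fmt"; hypothetical names):

func slicesForService(cs *kubernetes.Clientset, ns, svc string) error {
	slices, err := cs.DiscoveryV1().EndpointSlices(ns).List(context.TODO(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=" + svc})
	if err != nil {
		return err
	}
	for _, s := range slices.Items {
		// Each slice holds a chunk of the service's ready endpoints.
		fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
	}
	return nil
}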
• [SLOW TEST:35.112 seconds] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":10,"skipped":105,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:54:38.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 00:54:38.160: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4990813b-68a0-4bb9-bf1e-6e554b3f01fa" in namespace "downward-api-7196" to be "Succeeded or Failed" Oct 30 00:54:38.162: INFO: Pod "downwardapi-volume-4990813b-68a0-4bb9-bf1e-6e554b3f01fa": Phase="Pending", Reason="", readiness=false. Elapsed: 1.932023ms Oct 30 00:54:40.166: INFO: Pod "downwardapi-volume-4990813b-68a0-4bb9-bf1e-6e554b3f01fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005541956s Oct 30 00:54:42.169: INFO: Pod "downwardapi-volume-4990813b-68a0-4bb9-bf1e-6e554b3f01fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008660856s Oct 30 00:54:44.172: INFO: Pod "downwardapi-volume-4990813b-68a0-4bb9-bf1e-6e554b3f01fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01167863s STEP: Saw pod success Oct 30 00:54:44.172: INFO: Pod "downwardapi-volume-4990813b-68a0-4bb9-bf1e-6e554b3f01fa" satisfied condition "Succeeded or Failed" Oct 30 00:54:44.174: INFO: Trying to get logs from node node1 pod downwardapi-volume-4990813b-68a0-4bb9-bf1e-6e554b3f01fa container client-container: STEP: delete the pod Oct 30 00:54:44.190: INFO: Waiting for pod downwardapi-volume-4990813b-68a0-4bb9-bf1e-6e554b3f01fa to disappear Oct 30 00:54:44.192: INFO: Pod downwardapi-volume-4990813b-68a0-4bb9-bf1e-6e554b3f01fa no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:54:44.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7196" for this suite. 
• [SLOW TEST:6.073 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":118,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:54:33.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:54:33.192: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 30 00:54:42.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9308 --namespace=crd-publish-openapi-9308 create -f -' Oct 30 00:54:42.678: INFO: stderr: "" Oct 30 00:54:42.678: INFO: stdout: "e2e-test-crd-publish-openapi-4647-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Oct 30 00:54:42.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9308 --namespace=crd-publish-openapi-9308 delete e2e-test-crd-publish-openapi-4647-crds test-cr' Oct 30 00:54:42.842: INFO: stderr: "" Oct 30 00:54:42.842: INFO: stdout: "e2e-test-crd-publish-openapi-4647-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Oct 30 00:54:42.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9308 --namespace=crd-publish-openapi-9308 apply -f -' Oct 30 00:54:43.172: INFO: stderr: "" Oct 30 00:54:43.172: INFO: stdout: "e2e-test-crd-publish-openapi-4647-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Oct 30 00:54:43.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9308 --namespace=crd-publish-openapi-9308 delete e2e-test-crd-publish-openapi-4647-crds test-cr' Oct 30 00:54:43.297: INFO: stderr: "" Oct 30 00:54:43.297: INFO: stdout: "e2e-test-crd-publish-openapi-4647-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Oct 30 00:54:43.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9308 explain e2e-test-crd-publish-openapi-4647-crds' Oct 30 00:54:43.618: INFO: stderr: "" Oct 30 00:54:43.618: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4647-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for 
Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t<>\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:54:47.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9308" for this suite. • [SLOW TEST:14.058 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":12,"skipped":291,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:48.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:53:48.837: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:54:49.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-886" for this suite.
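Note: the Simple CustomResourceDefinition test above lists CRDs through the apiextensions API group, which ships in its own clientset rather than the core one. A sketch (extra imports: "fmt", "k8s.io/client-go/rest", and apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"):

func listCRDs(config *rest.Config) error {
	// A dedicated clientset for apiextensions.k8s.io resources.
	ac, err := apiextclient.NewForConfig(config)
	if err != nil {
		return err
	}
	crds, err := ac.ApiextensionsV1().CustomResourceDefinitions().List(
		context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name)
	}
	return nil
}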
• [SLOW TEST:60.810 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":7,"skipped":136,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:54:44.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Oct 30 00:54:44.307: INFO: The status of Pod labelsupdate008ff6b7-b3bf-4915-a49c-fad8563078fa is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:54:46.311: INFO: The status of Pod labelsupdate008ff6b7-b3bf-4915-a49c-fad8563078fa is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:54:48.311: INFO: The status of Pod labelsupdate008ff6b7-b3bf-4915-a49c-fad8563078fa is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:54:50.312: INFO: The status of Pod labelsupdate008ff6b7-b3bf-4915-a49c-fad8563078fa is Running (Ready = true) Oct 30 00:54:50.829: INFO: Successfully updated pod "labelsupdate008ff6b7-b3bf-4915-a49c-fad8563078fa" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:54:53.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9841" for this suite. 
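Note: the labels-on-modification test above patches the running pod's labels and then waits for the kubelet to rewrite the downward API volume file, which happens on the kubelet's periodic sync rather than instantly, hence the short pause after "Successfully updated pod". The update side reduces to a merge patch (hypothetical label; extra import: "k8s.io/apimachinery/pkg/types"):

func updatePodLabels(cs *kubernetes.Clientset, ns, name string) error {
	// Merge-patch only the labels; the downward API file (e.g.
	// /etc/podinfo/labels) catches up within the kubelet sync period.
	patch := []byte(`{"metadata":{"labels":{"testLabel":"updated"}}}`)
	_, err := cs.CoreV1().Pods(ns).Patch(context.TODO(), name,
		types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}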
• [SLOW TEST:8.833 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":157,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:54:49.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:54:49.771: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-2d9987c7-8e08-4bed-bee4-715dba636e5f" in namespace "security-context-test-8386" to be "Succeeded or Failed" Oct 30 00:54:49.777: INFO: Pod "busybox-privileged-false-2d9987c7-8e08-4bed-bee4-715dba636e5f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.706113ms Oct 30 00:54:51.780: INFO: Pod "busybox-privileged-false-2d9987c7-8e08-4bed-bee4-715dba636e5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008878737s Oct 30 00:54:53.784: INFO: Pod "busybox-privileged-false-2d9987c7-8e08-4bed-bee4-715dba636e5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01256441s Oct 30 00:54:53.784: INFO: Pod "busybox-privileged-false-2d9987c7-8e08-4bed-bee4-715dba636e5f" satisfied condition "Succeeded or Failed" Oct 30 00:54:53.877: INFO: Got logs for pod "busybox-privileged-false-2d9987c7-8e08-4bed-bee4-715dba636e5f": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:54:53.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8386" for this suite. 
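Note: the Security Context test above runs busybox with privileged set to false and treats the "ip: RTNETLINK answers: Operation not permitted" output captured in the log as proof that host-level network configuration is blocked. The relevant pod shape, with hypothetical names:

func unprivilegedPod() *v1.Pod {
	priv := false
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-privileged-false-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// Without privilege this fails with
				// "ip: RTNETLINK answers: Operation not permitted".
				Command:         []string{"sh", "-c", "ip link add dummy0 type dummy"},
				SecurityContext: &v1.SecurityContext{Privileged: &priv},
			}},
		},
	}
}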
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":192,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:54:53.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-5bec4397-8384-4d66-995e-29dfd9e92e63 STEP: Creating a pod to test consume configMaps Oct 30 00:54:53.158: INFO: Waiting up to 5m0s for pod "pod-configmaps-2128b25b-a145-4895-8a65-9792ed839fb6" in namespace "configmap-1803" to be "Succeeded or Failed" Oct 30 00:54:53.162: INFO: Pod "pod-configmaps-2128b25b-a145-4895-8a65-9792ed839fb6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.995261ms Oct 30 00:54:55.166: INFO: Pod "pod-configmaps-2128b25b-a145-4895-8a65-9792ed839fb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007867329s Oct 30 00:54:57.170: INFO: Pod "pod-configmaps-2128b25b-a145-4895-8a65-9792ed839fb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011580367s STEP: Saw pod success Oct 30 00:54:57.170: INFO: Pod "pod-configmaps-2128b25b-a145-4895-8a65-9792ed839fb6" satisfied condition "Succeeded or Failed" Oct 30 00:54:57.173: INFO: Trying to get logs from node node2 pod pod-configmaps-2128b25b-a145-4895-8a65-9792ed839fb6 container agnhost-container: STEP: delete the pod Oct 30 00:54:57.187: INFO: Waiting for pod pod-configmaps-2128b25b-a145-4895-8a65-9792ed839fb6 to disappear Oct 30 00:54:57.189: INFO: Pod pod-configmaps-2128b25b-a145-4895-8a65-9792ed839fb6 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:54:57.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1803" for this suite. • ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:54:53.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9835.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9835.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9835.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9835.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9835.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9835.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 30 00:54:59.964: INFO: DNS probes using dns-9835/dns-test-afcbfa0e-599b-4804-a1c9-0b68cfdca3ef succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:54:59.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9835" for this suite. • [SLOW TEST:6.093 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":197,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":167,"failed":0} [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:54:57.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Oct 30 00:55:03.744: INFO: Successfully updated pod "adopt-release-6g28z" STEP: Checking that the Job readopts the Pod Oct 30 00:55:03.744: INFO: Waiting up to 15m0s for pod "adopt-release-6g28z" in namespace "job-1294" to be "adopted" Oct 30 00:55:03.746: INFO: Pod "adopt-release-6g28z": Phase="Running", Reason="", readiness=true. Elapsed: 2.356607ms Oct 30 00:55:05.749: INFO: Pod "adopt-release-6g28z": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.005652199s Oct 30 00:55:05.749: INFO: Pod "adopt-release-6g28z" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Oct 30 00:55:06.258: INFO: Successfully updated pod "adopt-release-6g28z" STEP: Checking that the Job releases the Pod Oct 30 00:55:06.258: INFO: Waiting up to 15m0s for pod "adopt-release-6g28z" in namespace "job-1294" to be "released" Oct 30 00:55:06.261: INFO: Pod "adopt-release-6g28z": Phase="Running", Reason="", readiness=true. Elapsed: 2.649755ms Oct 30 00:55:08.265: INFO: Pod "adopt-release-6g28z": Phase="Running", Reason="", readiness=true. Elapsed: 2.006743356s Oct 30 00:55:08.265: INFO: Pod "adopt-release-6g28z" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:08.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1294" for this suite. • [SLOW TEST:11.074 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":14,"skipped":167,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:54:47.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Oct 30 00:54:47.251: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:10.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5713" for this suite.
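Note: the multi-version CRD test above flips one version's served flag to false and verifies that the version drops out of the published OpenAPI spec while the other version is untouched. A sketch of that update, with hypothetical names, reusing the apiextensions clientset from the earlier CRD sketch:

func unserveVersion(ac apiextclient.Interface, crdName, version string) error {
	crd, err := ac.ApiextensionsV1().CustomResourceDefinitions().Get(
		context.TODO(), crdName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for i := range crd.Spec.Versions {
		if crd.Spec.Versions[i].Name == version {
			// No longer served: dropped from discovery and the OpenAPI
			// spec, though objects already stored are untouched.
			crd.Spec.Versions[i].Served = false
		}
	}
	_, err = ac.ApiextensionsV1().CustomResourceDefinitions().Update(
		context.TODO(), crd, metav1.UpdateOptions{})
	return err
}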
• [SLOW TEST:22.836 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":13,"skipped":292,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:08.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:55:08.343: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Oct 30 00:55:10.373: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:11.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1967" for this suite. 
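Note: the quota test above caps the namespace at two pods with a ResourceQuota, creates an RC asking for three replicas, and watches the RC controller surface the rejection as a ReplicaFailure condition in the RC's status until the RC is scaled back within quota. A sketch of creating the quota and reading the condition, assuming the caller has already created an RC named "condition-test" (extra imports: "fmt", "k8s.io/apimachinery/pkg/api/resource"):

func quotaAndCondition(cs *kubernetes.Clientset, ns string) error {
	quota := &v1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: v1.ResourceQuotaSpec{
			// Only two pods allowed in this namespace.
			Hard: v1.ResourceList{v1.ResourcePods: resource.MustParse("2")},
		},
	}
	if _, err := cs.CoreV1().ResourceQuotas(ns).Create(context.TODO(), quota, metav1.CreateOptions{}); err != nil {
		return err
	}
	// (RC creation elided.) Once an RC asks for more replicas than the
	// quota permits, its status carries a ReplicaFailure condition
	// explaining the rejection.
	rc, err := cs.CoreV1().ReplicationControllers(ns).Get(context.TODO(), "condition-test", metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, c := range rc.Status.Conditions {
		fmt.Printf("%s=%s: %s\n", c.Type, c.Status, c.Message)
	}
	return nil
}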
• ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":15,"skipped":199,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:11.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:11.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5236" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":16,"skipped":229,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:54:20.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-3922, will wait for the garbage collector to delete the pods Oct 30 00:54:24.666: INFO: Deleting Job.batch foo took: 4.738042ms Oct 30 00:54:24.766: INFO: Terminating Job.batch foo pods took: 100.480969ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:12.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3922" for this suite. 
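Note: the Job deletion test above deletes the Job and then lets the suite wait for the garbage collector to remove the pods, as the "will wait for the garbage collector" line shows. A client asking for the same end state in one call can use foreground propagation, so the Job object itself only disappears once its pods are gone (hypothetical names):

func deleteJobAndPods(cs *kubernetes.Clientset, ns, name string) error {
	// Foreground: the API server removes the Job only after the GC has
	// deleted the pods the Job owns.
	policy := metav1.DeletePropagationForeground
	return cs.BatchV1().Jobs(ns).Delete(context.TODO(), name,
		metav1.DeleteOptions{PropagationPolicy: &policy})
}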
• [SLOW TEST:52.397 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":9,"skipped":260,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:55.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-9053 Oct 30 00:53:55.150: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:53:57.153: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:53:59.154: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Oct 30 00:53:59.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9053 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Oct 30 00:53:59.747: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Oct 30 00:53:59.747: INFO: stdout: "iptables" Oct 30 00:53:59.747: INFO: proxyMode: iptables Oct 30 00:53:59.752: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 30 00:53:59.754: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-9053 STEP: creating replication controller affinity-clusterip-timeout in namespace services-9053 I1030 00:53:59.764082 38 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-9053, replica count: 3 I1030 00:54:02.816113 38 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 00:54:05.816933 38 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 00:54:08.817626 38 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 30 00:54:08.823: INFO: Creating new exec pod Oct 30 00:54:15.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9053 exec execpod-affinity6gb26 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' Oct 30 00:54:16.182: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" Oct 
30 00:54:16.182: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 30 00:54:16.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9053 exec execpod-affinity6gb26 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.41.156 80' Oct 30 00:54:16.708: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.41.156 80\nConnection to 10.233.41.156 80 port [tcp/http] succeeded!\n" Oct 30 00:54:16.708: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 30 00:54:16.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9053 exec execpod-affinity6gb26 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.41.156:80/ ; done' Oct 30 00:54:17.014: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.41.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.41.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.41.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.41.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.41.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.41.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.41.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.41.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.41.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.41.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.41.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.41.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.41.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.41.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.41.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.41.156:80/\n" Oct 30 00:54:17.014: INFO: stdout: "\naffinity-clusterip-timeout-6sqjj\naffinity-clusterip-timeout-6sqjj\naffinity-clusterip-timeout-6sqjj\naffinity-clusterip-timeout-6sqjj\naffinity-clusterip-timeout-6sqjj\naffinity-clusterip-timeout-6sqjj\naffinity-clusterip-timeout-6sqjj\naffinity-clusterip-timeout-6sqjj\naffinity-clusterip-timeout-6sqjj\naffinity-clusterip-timeout-6sqjj\naffinity-clusterip-timeout-6sqjj\naffinity-clusterip-timeout-6sqjj\naffinity-clusterip-timeout-6sqjj\naffinity-clusterip-timeout-6sqjj\naffinity-clusterip-timeout-6sqjj\naffinity-clusterip-timeout-6sqjj" Oct 30 00:54:17.014: INFO: Received response from host: affinity-clusterip-timeout-6sqjj Oct 30 00:54:17.014: INFO: Received response from host: affinity-clusterip-timeout-6sqjj Oct 30 00:54:17.014: INFO: Received response from host: affinity-clusterip-timeout-6sqjj Oct 30 00:54:17.014: INFO: Received response from host: affinity-clusterip-timeout-6sqjj Oct 30 00:54:17.014: INFO: Received response from host: affinity-clusterip-timeout-6sqjj Oct 30 00:54:17.014: INFO: Received response from host: affinity-clusterip-timeout-6sqjj Oct 30 00:54:17.014: INFO: Received response from host: affinity-clusterip-timeout-6sqjj Oct 30 00:54:17.014: INFO: Received response from host: affinity-clusterip-timeout-6sqjj Oct 30 00:54:17.014: INFO: Received response from host: affinity-clusterip-timeout-6sqjj Oct 30 00:54:17.014: INFO: Received response from host: affinity-clusterip-timeout-6sqjj Oct 30 00:54:17.014: INFO: Received response from 
host: affinity-clusterip-timeout-6sqjj Oct 30 00:54:17.014: INFO: Received response from host: affinity-clusterip-timeout-6sqjj Oct 30 00:54:17.014: INFO: Received response from host: affinity-clusterip-timeout-6sqjj Oct 30 00:54:17.014: INFO: Received response from host: affinity-clusterip-timeout-6sqjj Oct 30 00:54:17.014: INFO: Received response from host: affinity-clusterip-timeout-6sqjj Oct 30 00:54:17.014: INFO: Received response from host: affinity-clusterip-timeout-6sqjj Oct 30 00:54:17.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9053 exec execpod-affinity6gb26 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.41.156:80/' Oct 30 00:54:17.260: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.41.156:80/\n" Oct 30 00:54:17.260: INFO: stdout: "affinity-clusterip-timeout-6sqjj" Oct 30 00:54:37.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9053 exec execpod-affinity6gb26 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.41.156:80/' Oct 30 00:54:37.497: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.41.156:80/\n" Oct 30 00:54:37.497: INFO: stdout: "affinity-clusterip-timeout-6sqjj" Oct 30 00:54:57.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9053 exec execpod-affinity6gb26 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.41.156:80/' Oct 30 00:54:57.832: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.41.156:80/\n" Oct 30 00:54:57.832: INFO: stdout: "affinity-clusterip-timeout-nwwz5" Oct 30 00:54:57.832: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-9053, will wait for the garbage collector to delete the pods Oct 30 00:54:57.897: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 3.584476ms Oct 30 00:54:57.998: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.734614ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:13.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9053" for this suite. 
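------------------------------
The stickiness pattern above (sixteen consecutive hits on affinity-clusterip-timeout-6sqjj, then a switch to -nwwz5 once the probes are spaced 20 seconds apart) is driven by two Service fields: sessionAffinity: ClientIP pins a client IP to one backend, and sessionAffinityConfig.clientIP.timeoutSeconds bounds how long that pin survives between requests. A minimal sketch of such a Service using client-go types; the function name and the timeout value passed in are illustrative, since the log does not show the value the suite configured:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// affinityService returns a ClusterIP Service whose client-IP session
// affinity expires after timeoutSeconds of inactivity, matching the
// behavior exercised by the affinity-clusterip-timeout test.
func affinityService(name, namespace string, timeoutSeconds int32) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Spec: corev1.ServiceSpec{
			Selector:        map[string]string{"name": name},
			Ports:           []corev1.ServicePort{{Port: 80}},
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeoutSeconds},
			},
		},
	}
}

With the iptables proxy mode detected earlier, the timeout is implemented with the kernel's "recent" match, so each request from the client refreshes the timer; affinity only breaks once the gap between requests exceeds the configured timeout.
------------------------------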
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:78.299 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":73,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:10.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-51fd2b11-7fac-48fd-bdea-2ee4dba755d5 STEP: Creating a pod to test consume secrets Oct 30 00:55:10.115: INFO: Waiting up to 5m0s for pod "pod-secrets-a227cfd7-3854-410b-a2aa-a33664e6d304" in namespace "secrets-3356" to be "Succeeded or Failed" Oct 30 00:55:10.118: INFO: Pod "pod-secrets-a227cfd7-3854-410b-a2aa-a33664e6d304": Phase="Pending", Reason="", readiness=false. Elapsed: 2.527755ms Oct 30 00:55:12.121: INFO: Pod "pod-secrets-a227cfd7-3854-410b-a2aa-a33664e6d304": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005452591s Oct 30 00:55:14.124: INFO: Pod "pod-secrets-a227cfd7-3854-410b-a2aa-a33664e6d304": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008642984s Oct 30 00:55:16.128: INFO: Pod "pod-secrets-a227cfd7-3854-410b-a2aa-a33664e6d304": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01245461s STEP: Saw pod success Oct 30 00:55:16.128: INFO: Pod "pod-secrets-a227cfd7-3854-410b-a2aa-a33664e6d304" satisfied condition "Succeeded or Failed" Oct 30 00:55:16.130: INFO: Trying to get logs from node node1 pod pod-secrets-a227cfd7-3854-410b-a2aa-a33664e6d304 container secret-volume-test: STEP: delete the pod Oct 30 00:55:16.143: INFO: Waiting for pod pod-secrets-a227cfd7-3854-410b-a2aa-a33664e6d304 to disappear Oct 30 00:55:16.145: INFO: Pod pod-secrets-a227cfd7-3854-410b-a2aa-a33664e6d304 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:16.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3356" for this suite. 
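------------------------------
The pod-secrets-* pod above follows the standard secret-as-volume pattern: the Secret is mounted read-only and a one-shot container reads the file back so the suite can assert on its logs. A sketch with client-go types (the busybox image and mount path are assumptions; the suite uses its own test image and generated names):

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretConsumerPod mounts an existing Secret as a read-only volume
// and runs a one-shot container that prints the mounted contents, so
// the pod ends in phase Succeeded, the condition waited on above.
func secretConsumerPod(namespace, secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-secrets-", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secretName},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
}
------------------------------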
• [SLOW TEST:6.074 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":299,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:16.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating api versions Oct 30 00:55:16.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7338 api-versions' Oct 30 00:55:16.284: INFO: stderr: "" Oct 30 00:55:16.284: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncustom.metrics.k8s.io/v1beta1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nintel.com/v1\nk8s.cni.cncf.io/v1\nmonitoring.coreos.com/v1\nmonitoring.coreos.com/v1alpha1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\ntelemetry.intel.com/v1alpha1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:16.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7338" for this suite. 
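------------------------------
kubectl api-versions, as run above, is a thin wrapper over the discovery endpoints (/api and /apis); the assertion that "v1" is present can also be made programmatically with client-go's discovery client. A sketch, assuming a kubeconfig path like the suite's /root/.kube/config:

package example

import (
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

// hasV1 reports whether the legacy core group version "v1" is among
// the server's advertised API versions, the same property the
// kubectl-based test verifies.
func hasV1(kubeconfigPath string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return false, err
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return false, err
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		return false, err
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			if v.GroupVersion == "v1" {
				return true, nil
			}
		}
	}
	return false, nil
}
------------------------------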
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":15,"skipped":309,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:13.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 30 00:55:21.080: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:21.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-776" for this suite. • [SLOW TEST:8.082 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":277,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:11.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 STEP: creating the pod Oct 30 00:55:11.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9888 create -f -' Oct 30 00:55:11.960: INFO: stderr: "" Oct 30 00:55:11.960: INFO: stdout: "pod/pause created\n" Oct 30 00:55:11.960: 
INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Oct 30 00:55:11.960: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9888" to be "running and ready" Oct 30 00:55:11.965: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.611182ms Oct 30 00:55:13.969: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009111993s Oct 30 00:55:15.972: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011589225s Oct 30 00:55:17.975: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01462386s Oct 30 00:55:19.979: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.019092867s Oct 30 00:55:19.980: INFO: Pod "pause" satisfied condition "running and ready" Oct 30 00:55:19.980: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: adding the label testing-label with value testing-label-value to a pod Oct 30 00:55:19.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9888 label pods pause testing-label=testing-label-value' Oct 30 00:55:20.166: INFO: stderr: "" Oct 30 00:55:20.166: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Oct 30 00:55:20.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9888 get pod pause -L testing-label' Oct 30 00:55:20.315: INFO: stderr: "" Oct 30 00:55:20.315: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s testing-label-value\n" STEP: removing the label testing-label of a pod Oct 30 00:55:20.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9888 label pods pause testing-label-' Oct 30 00:55:20.485: INFO: stderr: "" Oct 30 00:55:20.485: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Oct 30 00:55:20.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9888 get pod pause -L testing-label' Oct 30 00:55:20.652: INFO: stderr: "" Oct 30 00:55:20.652: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: using delete to clean up resources Oct 30 00:55:20.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9888 delete --grace-period=0 --force -f -' Oct 30 00:55:20.790: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Oct 30 00:55:20.790: INFO: stdout: "pod \"pause\" force deleted\n" Oct 30 00:55:20.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9888 get rc,svc -l name=pause --no-headers' Oct 30 00:55:20.973: INFO: stderr: "No resources found in kubectl-9888 namespace.\n" Oct 30 00:55:20.973: INFO: stdout: "" Oct 30 00:55:20.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9888 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 30 00:55:21.122: INFO: stderr: "" Oct 30 00:55:21.122: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:21.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9888" for this suite. • [SLOW TEST:9.596 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1306 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":17,"skipped":261,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:13.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-b0cdffed-d96a-4eb5-ae43-0f3002d8d17d STEP: Creating a pod to test consume secrets Oct 30 00:55:13.496: INFO: Waiting up to 5m0s for pod "pod-secrets-03bd2968-bc94-44aa-a35c-db7557e9e585" in namespace "secrets-707" to be "Succeeded or Failed" Oct 30 00:55:13.498: INFO: Pod "pod-secrets-03bd2968-bc94-44aa-a35c-db7557e9e585": Phase="Pending", Reason="", readiness=false. Elapsed: 1.89168ms Oct 30 00:55:15.501: INFO: Pod "pod-secrets-03bd2968-bc94-44aa-a35c-db7557e9e585": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004729642s Oct 30 00:55:17.504: INFO: Pod "pod-secrets-03bd2968-bc94-44aa-a35c-db7557e9e585": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008110802s Oct 30 00:55:19.509: INFO: Pod "pod-secrets-03bd2968-bc94-44aa-a35c-db7557e9e585": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012144773s Oct 30 00:55:21.512: INFO: Pod "pod-secrets-03bd2968-bc94-44aa-a35c-db7557e9e585": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015736601s Oct 30 00:55:23.516: INFO: Pod "pod-secrets-03bd2968-bc94-44aa-a35c-db7557e9e585": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.019637621s STEP: Saw pod success Oct 30 00:55:23.516: INFO: Pod "pod-secrets-03bd2968-bc94-44aa-a35c-db7557e9e585" satisfied condition "Succeeded or Failed" Oct 30 00:55:23.519: INFO: Trying to get logs from node node2 pod pod-secrets-03bd2968-bc94-44aa-a35c-db7557e9e585 container secret-volume-test: STEP: delete the pod Oct 30 00:55:23.534: INFO: Waiting for pod pod-secrets-03bd2968-bc94-44aa-a35c-db7557e9e585 to disappear Oct 30 00:55:23.536: INFO: Pod pod-secrets-03bd2968-bc94-44aa-a35c-db7557e9e585 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:23.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-707" for this suite. STEP: Destroying namespace "secret-namespace-9273" for this suite. • [SLOW TEST:10.103 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":88,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:23.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 00:55:23.593: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a5af402f-1983-4938-b0bc-6d296ccb71ee" in namespace "projected-9263" to be "Succeeded or Failed" Oct 30 00:55:23.595: INFO: Pod "downwardapi-volume-a5af402f-1983-4938-b0bc-6d296ccb71ee": Phase="Pending", Reason="", readiness=false. Elapsed: 1.941629ms Oct 30 00:55:25.600: INFO: Pod "downwardapi-volume-a5af402f-1983-4938-b0bc-6d296ccb71ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007331361s Oct 30 00:55:27.604: INFO: Pod "downwardapi-volume-a5af402f-1983-4938-b0bc-6d296ccb71ee": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010890895s STEP: Saw pod success Oct 30 00:55:27.604: INFO: Pod "downwardapi-volume-a5af402f-1983-4938-b0bc-6d296ccb71ee" satisfied condition "Succeeded or Failed" Oct 30 00:55:27.606: INFO: Trying to get logs from node node2 pod downwardapi-volume-a5af402f-1983-4938-b0bc-6d296ccb71ee container client-container: STEP: delete the pod Oct 30 00:55:27.618: INFO: Waiting for pod downwardapi-volume-a5af402f-1983-4938-b0bc-6d296ccb71ee to disappear Oct 30 00:55:27.620: INFO: Pod downwardapi-volume-a5af402f-1983-4938-b0bc-6d296ccb71ee no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:27.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9263" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":94,"failed":0} SSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:16.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4328.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4328.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4328.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4328.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4328.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4328.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4328.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4328.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4328.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4328.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4328.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4328.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4328.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4328.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4328.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4328.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4328.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4328.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 30 00:55:24.372: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4328.svc.cluster.local from pod dns-4328/dns-test-2f4e8134-eb2d-4280-86cc-aabea8bd10b3: the server could not find the requested resource (get pods dns-test-2f4e8134-eb2d-4280-86cc-aabea8bd10b3) Oct 30 00:55:24.374: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4328.svc.cluster.local from pod dns-4328/dns-test-2f4e8134-eb2d-4280-86cc-aabea8bd10b3: the server could not find the requested resource (get pods dns-test-2f4e8134-eb2d-4280-86cc-aabea8bd10b3) Oct 30 00:55:24.377: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4328.svc.cluster.local from pod dns-4328/dns-test-2f4e8134-eb2d-4280-86cc-aabea8bd10b3: the server could not find the requested resource (get pods dns-test-2f4e8134-eb2d-4280-86cc-aabea8bd10b3) Oct 30 00:55:24.380: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4328.svc.cluster.local from pod dns-4328/dns-test-2f4e8134-eb2d-4280-86cc-aabea8bd10b3: the server could not find the requested resource (get pods dns-test-2f4e8134-eb2d-4280-86cc-aabea8bd10b3) Oct 30 00:55:24.387: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4328.svc.cluster.local from pod dns-4328/dns-test-2f4e8134-eb2d-4280-86cc-aabea8bd10b3: the server could not find the requested resource (get pods dns-test-2f4e8134-eb2d-4280-86cc-aabea8bd10b3) Oct 30 00:55:24.389: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4328.svc.cluster.local from pod dns-4328/dns-test-2f4e8134-eb2d-4280-86cc-aabea8bd10b3: the server could not find the requested resource (get pods dns-test-2f4e8134-eb2d-4280-86cc-aabea8bd10b3) Oct 30 00:55:24.393: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4328.svc.cluster.local from pod 
dns-4328/dns-test-2f4e8134-eb2d-4280-86cc-aabea8bd10b3: the server could not find the requested resource (get pods dns-test-2f4e8134-eb2d-4280-86cc-aabea8bd10b3) Oct 30 00:55:24.395: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4328.svc.cluster.local from pod dns-4328/dns-test-2f4e8134-eb2d-4280-86cc-aabea8bd10b3: the server could not find the requested resource (get pods dns-test-2f4e8134-eb2d-4280-86cc-aabea8bd10b3) Oct 30 00:55:24.401: INFO: Lookups using dns-4328/dns-test-2f4e8134-eb2d-4280-86cc-aabea8bd10b3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4328.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4328.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4328.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4328.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4328.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4328.svc.cluster.local jessie_udp@dns-test-service-2.dns-4328.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4328.svc.cluster.local] Oct 30 00:55:29.435: INFO: DNS probes using dns-4328/dns-test-2f4e8134-eb2d-4280-86cc-aabea8bd10b3 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:29.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4328" for this suite. • [SLOW TEST:13.131 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":16,"skipped":326,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:21.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replication controller my-hostname-basic-93e8c565-2a36-4ebb-8a01-846dfedc4cf7 Oct 30 00:55:21.172: INFO: Pod name my-hostname-basic-93e8c565-2a36-4ebb-8a01-846dfedc4cf7: Found 0 pods out of 1 Oct 30 00:55:26.175: INFO: Pod name my-hostname-basic-93e8c565-2a36-4ebb-8a01-846dfedc4cf7: Found 1 pods out of 1 Oct 30 00:55:26.175: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-93e8c565-2a36-4ebb-8a01-846dfedc4cf7" are running Oct 30 00:55:26.177: INFO: Pod "my-hostname-basic-93e8c565-2a36-4ebb-8a01-846dfedc4cf7-hlmlb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 00:55:21 +0000 UTC Reason: Message:} {Type:Ready Status:True 
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 00:55:23 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 00:55:23 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 00:55:21 +0000 UTC Reason: Message:}]) Oct 30 00:55:26.178: INFO: Trying to dial the pod Oct 30 00:55:31.189: INFO: Controller my-hostname-basic-93e8c565-2a36-4ebb-8a01-846dfedc4cf7: Got expected result from replica 1 [my-hostname-basic-93e8c565-2a36-4ebb-8a01-846dfedc4cf7-hlmlb]: "my-hostname-basic-93e8c565-2a36-4ebb-8a01-846dfedc4cf7-hlmlb", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:31.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9277" for this suite. • [SLOW TEST:10.061 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":18,"skipped":265,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:27.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 00:55:27.667: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4ec20bd-192d-420f-a944-d6178962cd95" in namespace "projected-5104" to be "Succeeded or Failed" Oct 30 00:55:27.669: INFO: Pod "downwardapi-volume-c4ec20bd-192d-420f-a944-d6178962cd95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088097ms Oct 30 00:55:29.672: INFO: Pod "downwardapi-volume-c4ec20bd-192d-420f-a944-d6178962cd95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005509142s Oct 30 00:55:31.676: INFO: Pod "downwardapi-volume-c4ec20bd-192d-420f-a944-d6178962cd95": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009169548s STEP: Saw pod success Oct 30 00:55:31.676: INFO: Pod "downwardapi-volume-c4ec20bd-192d-420f-a944-d6178962cd95" satisfied condition "Succeeded or Failed" Oct 30 00:55:31.678: INFO: Trying to get logs from node node1 pod downwardapi-volume-c4ec20bd-192d-420f-a944-d6178962cd95 container client-container: STEP: delete the pod Oct 30 00:55:31.689: INFO: Waiting for pod downwardapi-volume-c4ec20bd-192d-420f-a944-d6178962cd95 to disappear Oct 30 00:55:31.691: INFO: Pod downwardapi-volume-c4ec20bd-192d-420f-a944-d6178962cd95 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:31.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5104" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":97,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:29.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:55:29.490: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Oct 30 00:55:29.504: INFO: The status of Pod pod-logs-websocket-98f278b6-8ea1-45e2-90f8-ab77e45b5f71 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:55:31.507: INFO: The status of Pod pod-logs-websocket-98f278b6-8ea1-45e2-90f8-ab77e45b5f71 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:55:33.507: INFO: The status of Pod pod-logs-websocket-98f278b6-8ea1-45e2-90f8-ab77e45b5f71 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:33.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9329" for this suite. 
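------------------------------
The websocket test above exercises log retrieval through the API server's streaming path. Outside the suite, the common client-go equivalent reads the same /log subresource over a plain HTTP stream rather than a websocket; a sketch under that substitution:

package example

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// streamLogs copies a container's log stream to stdout. The e2e test
// dials the log endpoint over a websocket; GetLogs here uses the
// standard REST stream, which returns the same bytes.
func streamLogs(ctx context.Context, cs kubernetes.Interface, namespace, pod string) error {
	req := cs.CoreV1().Pods(namespace).GetLogs(pod, &corev1.PodLogOptions{})
	rc, err := req.Stream(ctx)
	if err != nil {
		return err
	}
	defer rc.Close()
	_, err = io.Copy(os.Stdout, rc)
	return err
}
------------------------------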
• ------------------------------ {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":332,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:31.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 00:55:31.265: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a9ed4dd1-3e9f-4369-9257-3491e81f24c5" in namespace "projected-6015" to be "Succeeded or Failed" Oct 30 00:55:31.267: INFO: Pod "downwardapi-volume-a9ed4dd1-3e9f-4369-9257-3491e81f24c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106945ms Oct 30 00:55:33.270: INFO: Pod "downwardapi-volume-a9ed4dd1-3e9f-4369-9257-3491e81f24c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005705567s Oct 30 00:55:35.276: INFO: Pod "downwardapi-volume-a9ed4dd1-3e9f-4369-9257-3491e81f24c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011042205s STEP: Saw pod success Oct 30 00:55:35.276: INFO: Pod "downwardapi-volume-a9ed4dd1-3e9f-4369-9257-3491e81f24c5" satisfied condition "Succeeded or Failed" Oct 30 00:55:35.279: INFO: Trying to get logs from node node1 pod downwardapi-volume-a9ed4dd1-3e9f-4369-9257-3491e81f24c5 container client-container: STEP: delete the pod Oct 30 00:55:35.391: INFO: Waiting for pod downwardapi-volume-a9ed4dd1-3e9f-4369-9257-3491e81f24c5 to disappear Oct 30 00:55:35.393: INFO: Pod downwardapi-volume-a9ed4dd1-3e9f-4369-9257-3491e81f24c5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:35.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6015" for this suite. 
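------------------------------
Both projected downwardAPI tests above hinge on a resourceFieldRef: the mounted file carries the container's cpu request or limit, and when no limit is declared the kubelet substitutes the node's allocatable cpu, which is exactly what "node allocatable (cpu) as default cpu limit" asserts. A sketch of the volume definition; the container name, file path, and 1m divisor are illustrative assumptions:

package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// cpuLimitVolume builds a projected downwardAPI volume whose single
// file exposes the named container's cpu limit, scaled to millicores
// by the divisor.
func cpuLimitVolume() corev1.Volume {
	divisor := resource.MustParse("1m")
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
								Divisor:       divisor,
							},
						}},
					},
				}},
			},
		},
	}
}
------------------------------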
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":282,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:31.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 00:55:31.757: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1be5d426-62df-4955-acd3-4de3aa0ddba7" in namespace "downward-api-6575" to be "Succeeded or Failed" Oct 30 00:55:31.760: INFO: Pod "downwardapi-volume-1be5d426-62df-4955-acd3-4de3aa0ddba7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.795012ms Oct 30 00:55:33.766: INFO: Pod "downwardapi-volume-1be5d426-62df-4955-acd3-4de3aa0ddba7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008517053s Oct 30 00:55:35.771: INFO: Pod "downwardapi-volume-1be5d426-62df-4955-acd3-4de3aa0ddba7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013340192s STEP: Saw pod success Oct 30 00:55:35.771: INFO: Pod "downwardapi-volume-1be5d426-62df-4955-acd3-4de3aa0ddba7" satisfied condition "Succeeded or Failed" Oct 30 00:55:35.773: INFO: Trying to get logs from node node2 pod downwardapi-volume-1be5d426-62df-4955-acd3-4de3aa0ddba7 container client-container: STEP: delete the pod Oct 30 00:55:35.787: INFO: Waiting for pod downwardapi-volume-1be5d426-62df-4955-acd3-4de3aa0ddba7 to disappear Oct 30 00:55:35.789: INFO: Pod downwardapi-volume-1be5d426-62df-4955-acd3-4de3aa0ddba7 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:35.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6575" for this suite. 
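------------------------------
The plain (non-projected) Downward API volume used by the "podname only" test is the fieldRef counterpart: instead of a resource quantity, the mounted file carries a metadata field of the pod itself. A sketch; the volume and file names are assumptions:

package example

import (
	corev1 "k8s.io/api/core/v1"
)

// podnameVolume exposes the pod's own name as a file, the single
// item the "should provide podname only" test reads back.
func podnameVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "podname",
					FieldRef: &corev1.ObjectFieldSelector{
						APIVersion: "v1",
						FieldPath:  "metadata.name",
					},
				}},
			},
		},
	}
}
------------------------------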
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":110,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:35.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:35.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1223" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":11,"skipped":115,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:35.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:55:36.701: INFO: Checking APIGroup: apiregistration.k8s.io Oct 30 00:55:36.702: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Oct 30 00:55:36.702: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Oct 30 00:55:36.702: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Oct 30 00:55:36.702: INFO: Checking APIGroup: apps Oct 30 00:55:36.702: INFO: PreferredVersion.GroupVersion: apps/v1 Oct 30 00:55:36.702: INFO: Versions found [{apps/v1 v1}] Oct 30 00:55:36.702: INFO: apps/v1 matches apps/v1 Oct 30 00:55:36.702: INFO: Checking APIGroup: events.k8s.io Oct 30 00:55:36.703: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Oct 30 00:55:36.703: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Oct 30 00:55:36.703: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Oct 30 00:55:36.703: INFO: Checking APIGroup: authentication.k8s.io Oct 30 00:55:36.704: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Oct 30 00:55:36.704: INFO: Versions found [{authentication.k8s.io/v1 v1} 
{authentication.k8s.io/v1beta1 v1beta1}] Oct 30 00:55:36.704: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Oct 30 00:55:36.704: INFO: Checking APIGroup: authorization.k8s.io Oct 30 00:55:36.704: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Oct 30 00:55:36.704: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Oct 30 00:55:36.704: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Oct 30 00:55:36.704: INFO: Checking APIGroup: autoscaling Oct 30 00:55:36.706: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Oct 30 00:55:36.706: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Oct 30 00:55:36.706: INFO: autoscaling/v1 matches autoscaling/v1 Oct 30 00:55:36.706: INFO: Checking APIGroup: batch Oct 30 00:55:36.707: INFO: PreferredVersion.GroupVersion: batch/v1 Oct 30 00:55:36.707: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Oct 30 00:55:36.707: INFO: batch/v1 matches batch/v1 Oct 30 00:55:36.707: INFO: Checking APIGroup: certificates.k8s.io Oct 30 00:55:36.708: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Oct 30 00:55:36.708: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Oct 30 00:55:36.708: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Oct 30 00:55:36.708: INFO: Checking APIGroup: networking.k8s.io Oct 30 00:55:36.709: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Oct 30 00:55:36.709: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Oct 30 00:55:36.709: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Oct 30 00:55:36.709: INFO: Checking APIGroup: extensions Oct 30 00:55:36.710: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Oct 30 00:55:36.710: INFO: Versions found [{extensions/v1beta1 v1beta1}] Oct 30 00:55:36.710: INFO: extensions/v1beta1 matches extensions/v1beta1 Oct 30 00:55:36.710: INFO: Checking APIGroup: policy Oct 30 00:55:36.710: INFO: PreferredVersion.GroupVersion: policy/v1 Oct 30 00:55:36.710: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] Oct 30 00:55:36.710: INFO: policy/v1 matches policy/v1 Oct 30 00:55:36.710: INFO: Checking APIGroup: rbac.authorization.k8s.io Oct 30 00:55:36.711: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Oct 30 00:55:36.711: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Oct 30 00:55:36.711: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Oct 30 00:55:36.711: INFO: Checking APIGroup: storage.k8s.io Oct 30 00:55:36.712: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Oct 30 00:55:36.712: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Oct 30 00:55:36.712: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Oct 30 00:55:36.712: INFO: Checking APIGroup: admissionregistration.k8s.io Oct 30 00:55:36.713: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Oct 30 00:55:36.713: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Oct 30 00:55:36.713: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Oct 30 00:55:36.713: INFO: Checking APIGroup: apiextensions.k8s.io Oct 30 00:55:36.714: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Oct 30 00:55:36.714: INFO: Versions found [{apiextensions.k8s.io/v1 v1} 
{apiextensions.k8s.io/v1beta1 v1beta1}] Oct 30 00:55:36.714: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Oct 30 00:55:36.714: INFO: Checking APIGroup: scheduling.k8s.io Oct 30 00:55:36.714: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Oct 30 00:55:36.714: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Oct 30 00:55:36.714: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Oct 30 00:55:36.714: INFO: Checking APIGroup: coordination.k8s.io Oct 30 00:55:36.715: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Oct 30 00:55:36.715: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Oct 30 00:55:36.715: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Oct 30 00:55:36.715: INFO: Checking APIGroup: node.k8s.io Oct 30 00:55:36.716: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 Oct 30 00:55:36.716: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] Oct 30 00:55:36.716: INFO: node.k8s.io/v1 matches node.k8s.io/v1 Oct 30 00:55:36.716: INFO: Checking APIGroup: discovery.k8s.io Oct 30 00:55:36.717: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 Oct 30 00:55:36.717: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] Oct 30 00:55:36.717: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 Oct 30 00:55:36.717: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io Oct 30 00:55:36.718: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 Oct 30 00:55:36.718: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] Oct 30 00:55:36.718: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 Oct 30 00:55:36.718: INFO: Checking APIGroup: intel.com Oct 30 00:55:36.718: INFO: PreferredVersion.GroupVersion: intel.com/v1 Oct 30 00:55:36.718: INFO: Versions found [{intel.com/v1 v1}] Oct 30 00:55:36.718: INFO: intel.com/v1 matches intel.com/v1 Oct 30 00:55:36.718: INFO: Checking APIGroup: k8s.cni.cncf.io Oct 30 00:55:36.719: INFO: PreferredVersion.GroupVersion: k8s.cni.cncf.io/v1 Oct 30 00:55:36.719: INFO: Versions found [{k8s.cni.cncf.io/v1 v1}] Oct 30 00:55:36.719: INFO: k8s.cni.cncf.io/v1 matches k8s.cni.cncf.io/v1 Oct 30 00:55:36.719: INFO: Checking APIGroup: monitoring.coreos.com Oct 30 00:55:36.720: INFO: PreferredVersion.GroupVersion: monitoring.coreos.com/v1 Oct 30 00:55:36.720: INFO: Versions found [{monitoring.coreos.com/v1 v1} {monitoring.coreos.com/v1alpha1 v1alpha1}] Oct 30 00:55:36.720: INFO: monitoring.coreos.com/v1 matches monitoring.coreos.com/v1 Oct 30 00:55:36.720: INFO: Checking APIGroup: telemetry.intel.com Oct 30 00:55:36.722: INFO: PreferredVersion.GroupVersion: telemetry.intel.com/v1alpha1 Oct 30 00:55:36.723: INFO: Versions found [{telemetry.intel.com/v1alpha1 v1alpha1}] Oct 30 00:55:36.723: INFO: telemetry.intel.com/v1alpha1 matches telemetry.intel.com/v1alpha1 Oct 30 00:55:36.723: INFO: Checking APIGroup: custom.metrics.k8s.io Oct 30 00:55:36.723: INFO: PreferredVersion.GroupVersion: custom.metrics.k8s.io/v1beta1 Oct 30 00:55:36.723: INFO: Versions found [{custom.metrics.k8s.io/v1beta1 v1beta1}] Oct 30 00:55:36.723: INFO: custom.metrics.k8s.io/v1beta1 matches custom.metrics.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:36.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "discovery-4600" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":12,"skipped":154,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:21.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:55:21.133: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Oct 30 00:55:29.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1793 --namespace=crd-publish-openapi-1793 create -f -' Oct 30 00:55:30.165: INFO: stderr: "" Oct 30 00:55:30.165: INFO: stdout: "e2e-test-crd-publish-openapi-5675-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Oct 30 00:55:30.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1793 --namespace=crd-publish-openapi-1793 delete e2e-test-crd-publish-openapi-5675-crds test-foo' Oct 30 00:55:30.320: INFO: stderr: "" Oct 30 00:55:30.320: INFO: stdout: "e2e-test-crd-publish-openapi-5675-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Oct 30 00:55:30.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1793 --namespace=crd-publish-openapi-1793 apply -f -' Oct 30 00:55:30.618: INFO: stderr: "" Oct 30 00:55:30.618: INFO: stdout: "e2e-test-crd-publish-openapi-5675-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Oct 30 00:55:30.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1793 --namespace=crd-publish-openapi-1793 delete e2e-test-crd-publish-openapi-5675-crds test-foo' Oct 30 00:55:30.785: INFO: stderr: "" Oct 30 00:55:30.785: INFO: stdout: "e2e-test-crd-publish-openapi-5675-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Oct 30 00:55:30.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1793 --namespace=crd-publish-openapi-1793 create -f -' Oct 30 00:55:31.066: INFO: rc: 1 Oct 30 00:55:31.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1793 --namespace=crd-publish-openapi-1793 apply -f -' Oct 30 00:55:31.368: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Oct 30 00:55:31.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1793 --namespace=crd-publish-openapi-1793 create -f -' Oct 30 00:55:31.680: INFO: rc: 1 Oct 30 00:55:31.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1793 
--namespace=crd-publish-openapi-1793 apply -f -' Oct 30 00:55:31.940: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Oct 30 00:55:31.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1793 explain e2e-test-crd-publish-openapi-5675-crds' Oct 30 00:55:32.266: INFO: stderr: "" Oct 30 00:55:32.266: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5675-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Oct 30 00:55:32.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1793 explain e2e-test-crd-publish-openapi-5675-crds.metadata' Oct 30 00:55:32.607: INFO: stderr: "" Oct 30 00:55:32.607: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5675-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted.
This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects.
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only.
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Oct 30 00:55:32.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1793 explain e2e-test-crd-publish-openapi-5675-crds.spec' Oct 30 00:55:32.937: INFO: stderr: "" Oct 30 00:55:32.937: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5675-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Oct 30 00:55:32.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1793 explain e2e-test-crd-publish-openapi-5675-crds.spec.bars' Oct 30 00:55:33.286: INFO: stderr: "" Oct 30 00:55:33.286: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5675-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Oct 30 00:55:33.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1793 explain e2e-test-crd-publish-openapi-5675-crds.spec.bars2' Oct 30 00:55:33.599: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:37.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1793" for this suite. • [SLOW TEST:16.105 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------
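The sequence above is the client-side-validation contract in miniature: create/apply succeed (rc 0) for objects that match the published OpenAPI schema, fail with rc 1 for unknown or missing required properties, and kubectl explain serves per-field documentation from the same published schema. For reference, a minimal sketch of a comparable CRD and a deliberately invalid object; all names here (foos.example.com, spec.bars) are illustrative assumptions, not the e2e fixture's actual schema:

    # Sketch only (assumed names): a CRD whose schema requires spec.bars[].name.
    kubectl apply -f - <<'EOF'
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: foos.example.com
    spec:
      group: example.com
      scope: Namespaced
      names: {plural: foos, singular: foo, kind: Foo}
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            properties:
              spec:
                type: object
                properties:
                  bars:
                    type: array
                    items:
                      type: object
                      required: [name]
                      properties:
                        name: {type: string}
                        age: {type: string}
    EOF
    # An object with an unknown property should be rejected by kubectl's
    # client-side validation (exit code 1), which is what the bare "rc: 1"
    # entries in the log correspond to; the server never sees the object.
    kubectl create -f - <<'EOF' || echo "rejected, as the test expects"
    apiVersion: example.com/v1
    kind: Foo
    metadata: {name: test-foo}
    spec: {unknownField: true}
    EOF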
[BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:33.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override all Oct 30 00:55:33.584: INFO: Waiting up to 5m0s for pod "client-containers-41cc29f0-8796-4e42-bb03-31c7732f72e3" in namespace "containers-199" to be "Succeeded or Failed" Oct 30 00:55:33.586: INFO: Pod "client-containers-41cc29f0-8796-4e42-bb03-31c7732f72e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129732ms Oct 30 00:55:35.590: INFO: Pod "client-containers-41cc29f0-8796-4e42-bb03-31c7732f72e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005328155s Oct 30 00:55:37.595: INFO: Pod "client-containers-41cc29f0-8796-4e42-bb03-31c7732f72e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01052276s STEP: Saw pod success Oct 30 00:55:37.595: INFO: Pod "client-containers-41cc29f0-8796-4e42-bb03-31c7732f72e3" satisfied condition "Succeeded or Failed" Oct 30 00:55:37.598: INFO: Trying to get logs from node node1 pod client-containers-41cc29f0-8796-4e42-bb03-31c7732f72e3 container agnhost-container: <no value> STEP: delete the pod Oct 30 00:55:37.611: INFO: Waiting for pod client-containers-41cc29f0-8796-4e42-bb03-31c7732f72e3 to disappear Oct 30 00:55:37.613: INFO: Pod client-containers-41cc29f0-8796-4e42-bb03-31c7732f72e3 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:37.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-199" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":339,"failed":0} S ------------------------------
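The pod above ran to completion because the test replaces both halves of the image's startup contract: spec.containers[].command overrides the image ENTRYPOINT and args overrides its CMD, and restartPolicy: Never lets the pod reach Succeeded. A hand-run sketch of the same shape, assuming the agnhost test image used throughout this run (whose entrypoint-tester subcommand echoes back its arguments); pod and container names are illustrative:

    # Sketch: override ENTRYPOINT via command and CMD via args, then read the log.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata: {name: override-demo}
    spec:
      restartPolicy: Never
      containers:
      - name: agnhost-container
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        command: ["/agnhost"]                                 # replaces ENTRYPOINT
        args: ["entrypoint-tester", "override", "arguments"]  # replaces CMD
    EOF
    # Poll for completion the way the framework's "Succeeded or Failed" wait does:
    until [ "$(kubectl get pod override-demo -o jsonpath='{.status.phase}')" = Succeeded ]; do sleep 2; done
    kubectl logs override-demo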
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:35.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 00:55:35.893: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 00:55:37.903: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152135, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152135, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152135, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152135, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 00:55:40.915: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:41.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9194" for this suite. STEP: Destroying namespace "webhook-9194-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.643 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":20,"skipped":295,"failed":0} SSSSSSSSS ------------------------------
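The webhook spec exercises list-and-delete-by-collection semantics: the non-compliant configMap is denied while the labeled ValidatingWebhookConfigurations exist, and admitted once the collection is deleted. By hand that is a label-selector list and delete; the label key/value below are illustrative assumptions, not the fixture's actual labels:

    # Sketch: list, then bulk-delete, validating webhook configurations by label.
    kubectl get validatingwebhookconfigurations -l e2e-list-test=demo
    kubectl delete validatingwebhookconfigurations -l e2e-list-test=demo
    # After the collection delete, a configMap the webhook policy would have
    # denied is admitted again, which is what the second "Creating a configMap"
    # STEP above verifies.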
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:36.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:55:36.760: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:44.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9745" for this suite. • [SLOW TEST:8.136 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":13,"skipped":156,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:37.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:55:37.657: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-8f006ca3-bcd8-4d0c-b6b8-cbb73120fe98" in namespace "security-context-test-2261" to be "Succeeded or Failed" Oct 30 00:55:37.660: INFO: Pod "alpine-nnp-false-8f006ca3-bcd8-4d0c-b6b8-cbb73120fe98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.720274ms Oct 30 00:55:39.663: INFO: Pod "alpine-nnp-false-8f006ca3-bcd8-4d0c-b6b8-cbb73120fe98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005999607s Oct 30 00:55:41.668: INFO: Pod "alpine-nnp-false-8f006ca3-bcd8-4d0c-b6b8-cbb73120fe98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010385633s Oct 30 00:55:43.671: INFO: Pod "alpine-nnp-false-8f006ca3-bcd8-4d0c-b6b8-cbb73120fe98": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013689965s Oct 30 00:55:45.677: INFO: Pod "alpine-nnp-false-8f006ca3-bcd8-4d0c-b6b8-cbb73120fe98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.019831591s Oct 30 00:55:45.677: INFO: Pod "alpine-nnp-false-8f006ca3-bcd8-4d0c-b6b8-cbb73120fe98" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:55:45.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2261" for this suite.
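The alpine-nnp-false pod passes because allowPrivilegeEscalation: false sets the no_new_privs flag on the container's first process, so not even setuid binaries can raise privileges; the test container checks this condition itself and exits 0. A minimal sketch of such a pod, assuming an alpine image and a self-check command comparable to the fixture's (both illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata: {name: nnp-false-demo}
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: alpine:3.14
        # Should print "NoNewPrivs: 1" on kernels that expose the flag.
        command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
        securityContext:
          allowPrivilegeEscalation: false
          runAsUser: 1000
    EOF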
• [SLOW TEST:8.071 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":340,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":11,"skipped":284,"failed":0} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:37.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-277 STEP: creating service affinity-clusterip in namespace services-277 STEP: creating replication controller affinity-clusterip in namespace services-277 I1030 00:55:37.246658 26 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-277, replica count: 3 I1030 00:55:40.298295 26 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 00:55:43.299395 26 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 30 00:55:43.305: INFO: Creating new exec pod Oct 30 00:55:50.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-277 exec execpod-affinity4gpv2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Oct 30 00:55:50.578: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip 80\n+ echo hostName\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Oct 30 00:55:50.578: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 30 00:55:50.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-277 exec execpod-affinity4gpv2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.8.49 80' Oct 30 00:55:50.823: INFO: stderr: "+ nc -v -t -w 2 10.233.8.49 80\nConnection to 10.233.8.49 80 port [tcp/http] succeeded!\n+ echo hostName\n" Oct 30 00:55:50.823: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: 
close\r\n\r\n400 Bad Request" Oct 30 00:55:50.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-277 exec execpod-affinity4gpv2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.8.49:80/ ; done' Oct 30 00:55:51.128: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.8.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.8.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.8.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.8.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.8.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.8.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.8.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.8.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.8.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.8.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.8.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.8.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.8.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.8.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.8.49:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.8.49:80/\n" Oct 30 00:55:51.128: INFO: stdout: "\naffinity-clusterip-prgfn\naffinity-clusterip-prgfn\naffinity-clusterip-prgfn\naffinity-clusterip-prgfn\naffinity-clusterip-prgfn\naffinity-clusterip-prgfn\naffinity-clusterip-prgfn\naffinity-clusterip-prgfn\naffinity-clusterip-prgfn\naffinity-clusterip-prgfn\naffinity-clusterip-prgfn\naffinity-clusterip-prgfn\naffinity-clusterip-prgfn\naffinity-clusterip-prgfn\naffinity-clusterip-prgfn\naffinity-clusterip-prgfn" Oct 30 00:55:51.128: INFO: Received response from host: affinity-clusterip-prgfn Oct 30 00:55:51.128: INFO: Received response from host: affinity-clusterip-prgfn Oct 30 00:55:51.128: INFO: Received response from host: affinity-clusterip-prgfn Oct 30 00:55:51.128: INFO: Received response from host: affinity-clusterip-prgfn Oct 30 00:55:51.128: INFO: Received response from host: affinity-clusterip-prgfn Oct 30 00:55:51.128: INFO: Received response from host: affinity-clusterip-prgfn Oct 30 00:55:51.128: INFO: Received response from host: affinity-clusterip-prgfn Oct 30 00:55:51.128: INFO: Received response from host: affinity-clusterip-prgfn Oct 30 00:55:51.128: INFO: Received response from host: affinity-clusterip-prgfn Oct 30 00:55:51.128: INFO: Received response from host: affinity-clusterip-prgfn Oct 30 00:55:51.128: INFO: Received response from host: affinity-clusterip-prgfn Oct 30 00:55:51.128: INFO: Received response from host: affinity-clusterip-prgfn Oct 30 00:55:51.128: INFO: Received response from host: affinity-clusterip-prgfn Oct 30 00:55:51.128: INFO: Received response from host: affinity-clusterip-prgfn Oct 30 00:55:51.128: INFO: Received response from host: affinity-clusterip-prgfn Oct 30 00:55:51.128: INFO: Received response from host: affinity-clusterip-prgfn Oct 30 00:55:51.128: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-277, will wait for the garbage collector to delete the pods Oct 30 00:55:51.190: INFO: Deleting ReplicationController affinity-clusterip took: 3.373781ms Oct 30 00:55:51.290: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.472216ms [AfterEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:03.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-277" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:25.792 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":12,"skipped":284,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:41.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:09.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2890" for this suite. • [SLOW TEST:28.063 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":21,"skipped":304,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------
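The quota spec's STEP sequence maps one-to-one onto observable API state: creating the ResourceQuota populates status.hard, creating a ConfigMap raises status.used.configmaps, and deleting it releases the usage again (asynchronously, via the quota controller, hence the test's "Ensuring" waits). A hand-run sketch with assumed names:

    kubectl create namespace quota-demo
    kubectl apply -n quota-demo -f - <<'EOF'
    apiVersion: v1
    kind: ResourceQuota
    metadata: {name: cm-quota}
    spec:
      hard: {configmaps: "2"}
    EOF
    kubectl -n quota-demo create configmap demo-cm --from-literal=k=v
    # used.configmaps should rise to reflect the new object, then fall again
    # (after a short delay) once it is deleted:
    kubectl -n quota-demo get resourcequota cm-quota -o jsonpath='{.status.used.configmaps}'
    kubectl -n quota-demo delete configmap demo-cm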
[Conformance]","total":-1,"completed":21,"skipped":304,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:45.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:55:45.757: INFO: Pod name rollover-pod: Found 0 pods out of 1 Oct 30 00:55:50.764: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 30 00:55:52.772: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Oct 30 00:55:54.777: INFO: Creating deployment "test-rollover-deployment" Oct 30 00:55:54.783: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Oct 30 00:55:56.789: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Oct 30 00:55:56.795: INFO: Ensure that both replica sets have 1 created replica Oct 30 00:55:56.802: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Oct 30 00:55:56.810: INFO: Updating deployment test-rollover-deployment Oct 30 00:55:56.810: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Oct 30 00:55:58.816: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Oct 30 00:55:58.822: INFO: Make sure deployment "test-rollover-deployment" is complete Oct 30 00:55:58.827: INFO: all replica sets need to contain the pod-template-hash label Oct 30 00:55:58.827: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152154, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152154, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152156, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152154, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:56:00.836: INFO: all replica sets need to contain the pod-template-hash label Oct 30 00:56:00.836: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152154, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152154, loc:(*time.Location)(0x9e12f00)}}, 
Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152159, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152154, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:56:02.834: INFO: all replica sets need to contain the pod-template-hash label Oct 30 00:56:02.834: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152154, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152154, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152159, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152154, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:56:04.835: INFO: all replica sets need to contain the pod-template-hash label Oct 30 00:56:04.835: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152154, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152154, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152159, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152154, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:56:06.834: INFO: all replica sets need to contain the pod-template-hash label Oct 30 00:56:06.834: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152154, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152154, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152159, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152154, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:56:08.833: INFO: all replica sets need to contain the pod-template-hash label Oct 30 00:56:08.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152154, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152154, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152159, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152154, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:56:10.835: INFO: Oct 30 00:56:10.835: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 30 00:56:10.842: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-2058 90287770-581a-4c27-8ae4-83a66d682b24 65204 2 2021-10-30 00:55:54 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-30 00:55:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-30 00:56:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0048b2318 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-30 00:55:54 +0000 UTC,LastTransitionTime:2021-10-30 00:55:54 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2021-10-30 00:56:09 +0000 UTC,LastTransitionTime:2021-10-30 00:55:54 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 30 00:56:10.845: INFO: New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-2058 6559a0a3-b3a0-4979-92f7-2b52bcae3a02 65194 2 2021-10-30 00:55:56 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 90287770-581a-4c27-8ae4-83a66d682b24 0xc0048b29b0 0xc0048b29b1}] [] [{kube-controller-manager Update apps/v1 2021-10-30 00:56:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"90287770-581a-4c27-8ae4-83a66d682b24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File 
IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0048b2a38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 30 00:56:10.845: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Oct 30 00:56:10.845: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-2058 26b78fdd-05ad-4456-b50a-4cc9fda3a865 65203 2 2021-10-30 00:55:45 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 90287770-581a-4c27-8ae4-83a66d682b24 0xc0048b2767 0xc0048b2768}] [] [{e2e.test Update apps/v1 2021-10-30 00:55:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-30 00:56:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"90287770-581a-4c27-8ae4-83a66d682b24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0048b2808 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 30 00:56:10.846: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-2058 e76f5c0e-3a68-4324-8b55-cb8881a3a863 64679 2 2021-10-30 00:55:54 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment 
test-rollover-deployment 90287770-581a-4c27-8ae4-83a66d682b24 0xc0048b2887 0xc0048b2888}] [] [{kube-controller-manager Update apps/v1 2021-10-30 00:55:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"90287770-581a-4c27-8ae4-83a66d682b24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0048b2938 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 30 00:56:10.849: INFO: Pod "test-rollover-deployment-98c5f4599-2jd6s" is available: &Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-2jd6s test-rollover-deployment-98c5f4599- deployment-2058 a11c0cb7-89e2-4ec2-b570-f5c01922669c 64705 0 2021-10-30 00:55:56 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.225" ], "mac": "1a:19:19:19:c1:08", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.225" ], "mac": "1a:19:19:19:c1:08", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 6559a0a3-b3a0-4979-92f7-2b52bcae3a02 0xc0048b2f9f 0xc0048b2fb0}] [] [{kube-controller-manager Update v1 2021-10-30 00:55:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6559a0a3-b3a0-4979-92f7-2b52bcae3a02\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 00:55:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 00:55:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.225\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rs9tm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rs9tm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vo
lumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 00:55:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 00:55:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 00:55:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 00:55:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.225,StartTime:2021-10-30 00:55:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 00:55:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://12aeebe5f7c2207515ec9cab475a9aae2e3bd8f70d63848ca15870aa2f21d83d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.225,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:10.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2058" for this suite. 
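What the rollover spec just verified: the deployment adopts the pre-existing "test-rollover-controller" replica set, the pod template is changed mid-rollout (the deliberately unpullable gcr.io/google_samples/gb-redisslave:nonexistent image is swapped for agnhost), and because the strategy uses maxUnavailable: 0 with minReadySeconds: 10, the old replica sets are only scaled to zero once the new pod has been Ready for the full window. Observing a rollover by hand can be as small as the following sketch (deployment name is illustrative; kubectl create deployment names the container after the image, here "httpd"):

    kubectl create deployment rollover-demo --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
    kubectl set image deployment/rollover-demo httpd=k8s.gcr.io/e2e-test-images/agnhost:2.32
    kubectl rollout status deployment/rollover-demo   # returns once NewReplicaSetAvailable
    kubectl get rs -l app=rollover-demo               # old ReplicaSet at 0, new one owns the pod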
• [SLOW TEST:25.124 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":20,"skipped":358,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:27.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-8209 Oct 30 00:53:27.805: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:53:29.809: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:53:31.808: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Oct 30 00:53:31.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Oct 30 00:53:32.103: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Oct 30 00:53:32.103: INFO: stdout: "iptables" Oct 30 00:53:32.103: INFO: proxyMode: iptables Oct 30 00:53:32.110: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 30 00:53:32.112: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-8209 STEP: creating replication controller affinity-nodeport-timeout in namespace services-8209 I1030 00:53:32.124245 36 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-8209, replica count: 3 I1030 00:53:35.176621 36 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 00:53:38.177512 36 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 00:53:41.178808 36 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 00:53:44.179425 36 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 00:53:47.180106 36 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 30 00:53:47.190: INFO: Creating new exec pod Oct 30 
00:53:54.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Oct 30 00:53:54.442: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport-timeout 80\n+ echo hostName\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Oct 30 00:53:54.442: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 30 00:53:54.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.6.124 80' Oct 30 00:53:54.677: INFO: stderr: "+ nc -v -t -w 2 10.233.6.124 80\n+ echo hostName\nConnection to 10.233.6.124 80 port [tcp/http] succeeded!\n" Oct 30 00:53:54.677: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 30 00:53:54.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:53:54.920: INFO: rc: 1 Oct 30 00:53:54.920: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:53:55.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:53:56.318: INFO: rc: 1 Oct 30 00:53:56.318: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:53:56.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:53:57.161: INFO: rc: 1 Oct 30 00:53:57.161: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
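
The proxy-mode check earlier in this test simply reads kube-proxy's metrics endpoint, exactly as the kube-proxy-mode-detector pod does with curl in the log. For reference, a minimal Go sketch of the same probe (not the e2e framework's own code; it assumes kube-proxy listening on its default metrics port 10249 on the local node, as shown above):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Query kube-proxy's metrics port for the active proxy mode,
	// mirroring: curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get("http://localhost:10249/proxyMode")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("proxyMode: %s\n", body) // e.g. "iptables", matching the log above
}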
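
The affinity-nodeport-timeout service created above is a NodePort Service with ClientIP session affinity and an explicit affinity timeout. A hedged client-go sketch of such a spec follows; the selector, backend port, and the 10-second timeout are illustrative assumptions, since the log does not print the manifest:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Hypothetical timeout value; the log does not show what the test chose.
	timeout := int32(10)
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport-timeout"},
		Spec: corev1.ServiceSpec{
			Type: corev1.ServiceTypeNodePort,
			// Assumed selector matching the replication controller's pods.
			Selector:        map[string]string{"name": "affinity-nodeport-timeout"},
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376), // assumed backend port
			}},
		},
	}
	fmt.Printf("session affinity: %s, timeout: %ds\n",
		svc.Spec.SessionAffinity, *svc.Spec.SessionAffinityConfig.ClientIP.TimeoutSeconds)
}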
[Oct 30 00:53:57.921 through 00:55:25.167: the identical probe ('/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776') was retried roughly once per second. Every attempt returned rc: 1 with stderr "nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused" and logged "Service reachability failing with error: ... command terminated with exit code 1, error: exit status 1, Retrying..."; the duplicate retry blocks are collapsed into this placeholder.]
Oct 30 00:55:25.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:26.168: INFO: rc: 1 Oct 30 00:55:26.168: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:26.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:27.190: INFO: rc: 1 Oct 30 00:55:27.190: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:27.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:28.161: INFO: rc: 1 Oct 30 00:55:28.161: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:28.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:29.263: INFO: rc: 1 Oct 30 00:55:29.263: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:55:29.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:30.304: INFO: rc: 1 Oct 30 00:55:30.304: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:30.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:31.167: INFO: rc: 1 Oct 30 00:55:31.167: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:31.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:32.207: INFO: rc: 1 Oct 30 00:55:32.207: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:32.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:33.152: INFO: rc: 1 Oct 30 00:55:33.152: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:55:33.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:34.352: INFO: rc: 1 Oct 30 00:55:34.352: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:34.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:35.430: INFO: rc: 1 Oct 30 00:55:35.430: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:35.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:36.164: INFO: rc: 1 Oct 30 00:55:36.164: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:36.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:37.192: INFO: rc: 1 Oct 30 00:55:37.192: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:55:37.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:38.260: INFO: rc: 1 Oct 30 00:55:38.260: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:38.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:39.418: INFO: rc: 1 Oct 30 00:55:39.418: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:39.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:40.271: INFO: rc: 1 Oct 30 00:55:40.271: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:40.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:41.168: INFO: rc: 1 Oct 30 00:55:41.168: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:55:41.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:43.255: INFO: rc: 1 Oct 30 00:55:43.255: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:43.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:44.268: INFO: rc: 1 Oct 30 00:55:44.268: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:44.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:45.194: INFO: rc: 1 Oct 30 00:55:45.194: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:45.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:47.000: INFO: rc: 1 Oct 30 00:55:47.000: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:55:47.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:48.318: INFO: rc: 1 Oct 30 00:55:48.318: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:48.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:50.573: INFO: rc: 1 Oct 30 00:55:50.574: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:50.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:51.202: INFO: rc: 1 Oct 30 00:55:51.202: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:51.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:52.328: INFO: rc: 1 Oct 30 00:55:52.328: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:55:52.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:53.157: INFO: rc: 1 Oct 30 00:55:53.157: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:53.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:54.193: INFO: rc: 1 Oct 30 00:55:54.193: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:54.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:55.177: INFO: rc: 1 Oct 30 00:55:55.177: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:55.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776' Oct 30 00:55:55.536: INFO: rc: 1 Oct 30 00:55:55.536: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8209 exec execpod-affinityqmm95 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30776: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30776 nc: connect to 10.10.190.207 port 30776 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
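------------------------------
The loop above is the suite's service-reachability probe: it execs into the helper pod execpod-affinityqmm95 and pipes one line into netcat against the node IP and NodePort (10.10.190.207:30776), retrying until a 2m0s deadline expires. A minimal stand-alone sketch of that retry loop, assuming kubectl is on PATH and reusing the pod, namespace, and endpoint from the log; this illustrates the pattern, not the framework's actual implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	const (
    		namespace = "services-8209"         // test namespace from the log
    		pod       = "execpod-affinityqmm95" // helper exec pod from the log
    		probe     = "echo hostName | nc -v -t -w 2 10.10.190.207 30776"
    		timeout   = 2 * time.Minute // the 2m0s window the failure message reports
    		interval  = time.Second
    	)
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// Same shape as the logged command:
    		//   kubectl exec <pod> -- /bin/sh -x -c '<probe>'
    		out, err := exec.Command("kubectl", "--namespace", namespace,
    			"exec", pod, "--", "/bin/sh", "-x", "-c", probe).CombinedOutput()
    		if err == nil {
    			fmt.Printf("service reachable: %s", out)
    			return
    		}
    		fmt.Println("Service reachability failing, retrying...")
    		time.Sleep(interval)
    	}
    	fmt.Println("service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30776 over TCP protocol")
    }

Note that the error is "Connection refused" on every attempt: nothing was listening on the NodePort for the whole window, which typically points at kube-proxy never programming the port on that node rather than at a slow backend pod.
------------------------------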
Oct 30 00:55:55.537: FAIL: Unexpected error:
    <*errors.errorString | 0xc00468aa30>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30776 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30776 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc001b16840, 0x779f8f8, 0xc0023b3760, 0xc000476500)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493 +0x751
k8s.io/kubernetes/test/e2e/network.glob..func24.26()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1846 +0x9c
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000204d80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000204d80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000204d80, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Oct 30 00:55:55.538: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-8209, will wait for the garbage collector to delete the pods
Oct 30 00:55:55.612: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 4.29402ms
Oct 30 00:55:55.713: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.898738ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-8209".
STEP: Found 33 events.
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:27 +0000 UTC - event for kube-proxy-mode-detector: {default-scheduler } Scheduled: Successfully assigned services-8209/kube-proxy-mode-detector to node1
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:29 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node1} Created: Created container agnhost-container
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:29 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 302.35069ms
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:29 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:30 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node1} Started: Started container agnhost-container
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:32 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-kvpsj
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:32 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-xtwdm
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:32 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-q556p
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:32 +0000 UTC - event for affinity-nodeport-timeout-kvpsj: {default-scheduler } Scheduled: Successfully assigned services-8209/affinity-nodeport-timeout-kvpsj to node2
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:32 +0000 UTC - event for affinity-nodeport-timeout-q556p: {default-scheduler } Scheduled: Successfully assigned services-8209/affinity-nodeport-timeout-q556p to node1
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:32 +0000 UTC - event for affinity-nodeport-timeout-xtwdm: {default-scheduler } Scheduled: Successfully assigned services-8209/affinity-nodeport-timeout-xtwdm to node1
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:32 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node1} Killing: Stopping container agnhost-container
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:34 +0000 UTC - event for affinity-nodeport-timeout-kvpsj: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:35 +0000 UTC - event for affinity-nodeport-timeout-kvpsj: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 566.952605ms
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:35 +0000 UTC - event for affinity-nodeport-timeout-kvpsj: {kubelet node2} Started: Started container affinity-nodeport-timeout
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:35 +0000 UTC - event for affinity-nodeport-timeout-kvpsj: {kubelet node2} Created: Created container affinity-nodeport-timeout
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:40 +0000 UTC - event for affinity-nodeport-timeout-q556p: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 303.070423ms
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:40 +0000 UTC - event for affinity-nodeport-timeout-q556p: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:40 +0000 UTC - event for affinity-nodeport-timeout-xtwdm: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:41 +0000 UTC - event for affinity-nodeport-timeout-q556p: {kubelet node1} Started: Started container affinity-nodeport-timeout
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:41 +0000 UTC - event for affinity-nodeport-timeout-q556p: {kubelet node1} Created: Created container affinity-nodeport-timeout
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:41 +0000 UTC - event for affinity-nodeport-timeout-xtwdm: {kubelet node1} Created: Created container affinity-nodeport-timeout
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:41 +0000 UTC - event for affinity-nodeport-timeout-xtwdm: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 576.135876ms
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:43 +0000 UTC - event for affinity-nodeport-timeout-xtwdm: {kubelet node1} Started: Started container affinity-nodeport-timeout
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:47 +0000 UTC - event for execpod-affinityqmm95: {default-scheduler } Scheduled: Successfully assigned services-8209/execpod-affinityqmm95 to node1
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:49 +0000 UTC - event for execpod-affinityqmm95: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:50 +0000 UTC - event for execpod-affinityqmm95: {kubelet node1} Created: Created container agnhost-container
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:50 +0000 UTC - event for execpod-affinityqmm95: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 779.043085ms
Oct 30 00:56:13.231: INFO: At 2021-10-30 00:53:51 +0000 UTC - event
for execpod-affinityqmm95: {kubelet node1} Started: Started container agnhost-container Oct 30 00:56:13.231: INFO: At 2021-10-30 00:55:55 +0000 UTC - event for affinity-nodeport-timeout-kvpsj: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout Oct 30 00:56:13.231: INFO: At 2021-10-30 00:55:55 +0000 UTC - event for affinity-nodeport-timeout-q556p: {kubelet node1} Killing: Stopping container affinity-nodeport-timeout Oct 30 00:56:13.231: INFO: At 2021-10-30 00:55:55 +0000 UTC - event for affinity-nodeport-timeout-xtwdm: {kubelet node1} Killing: Stopping container affinity-nodeport-timeout Oct 30 00:56:13.231: INFO: At 2021-10-30 00:55:55 +0000 UTC - event for execpod-affinityqmm95: {kubelet node1} Killing: Stopping container agnhost-container Oct 30 00:56:13.234: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 00:56:13.234: INFO: Oct 30 00:56:13.239: INFO: Logging node info for node master1 Oct 30 00:56:13.242: INFO: Node Info: &Node{ObjectMeta:{master1 b47c04d5-47a7-4a95-8e97-481e6e60af54 64799 0 2021-10-29 21:05:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-29 21:05:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-29 21:13:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:27 +0000 UTC,LastTransitionTime:2021-10-29 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:06 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:06 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:06 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 00:56:06 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d3ed60c561e427db72df14bd9006ed0,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:01b9d6bc-4126-4864-a1df-901a1bee4906,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 00:56:13.242: INFO: Logging kubelet events for node master1 Oct 30 00:56:13.244: INFO: Logging pods the kubelet 
thinks is on node master1
Oct 30 00:56:13.274: INFO: kube-apiserver-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 00:56:13.274: INFO: Container kube-apiserver ready: true, restart count 0
Oct 30 00:56:13.274: INFO: kube-controller-manager-master1 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 00:56:13.274: INFO: Container kube-controller-manager ready: true, restart count 2
Oct 30 00:56:13.274: INFO: kube-flannel-d4pmt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 00:56:13.274: INFO: Init container install-cni ready: true, restart count 0
Oct 30 00:56:13.274: INFO: Container kube-flannel ready: true, restart count 2
Oct 30 00:56:13.274: INFO: kube-multus-ds-amd64-wgkfq started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 00:56:13.274: INFO: Container kube-multus ready: true, restart count 1
Oct 30 00:56:13.274: INFO: kube-scheduler-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 00:56:13.274: INFO: Container kube-scheduler ready: true, restart count 0
Oct 30 00:56:13.274: INFO: kube-proxy-z5k8p started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 00:56:13.274: INFO: Container kube-proxy ready: true, restart count 1
Oct 30 00:56:13.274: INFO: coredns-8474476ff8-lczbr started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 00:56:13.274: INFO: Container coredns ready: true, restart count 1
Oct 30 00:56:13.274: INFO: container-registry-65d7c44b96-zzkfl started at 2021-10-29 21:12:56 +0000 UTC (0+2 container statuses recorded)
Oct 30 00:56:13.274: INFO: Container docker-registry ready: true, restart count 0
Oct 30 00:56:13.274: INFO: Container nginx ready: true, restart count 0
Oct 30 00:56:13.274: INFO: node-exporter-fv84w started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 00:56:13.274: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 30 00:56:13.274: INFO: Container node-exporter ready: true, restart count 0
W1030 00:56:13.287705 36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
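------------------------------
After a failure, the framework's AfterEach dumps the namespace's events, every node object, and per-node pod statuses, which is what the surrounding output shows. A comparable event dump can be reproduced outside the suite with client-go; this is a sketch assuming the kubeconfig path and namespace from the log, and the print format only approximates the framework's:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a client from the same kubeconfig the suite uses.
    	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	// List events in the test namespace, mirroring 'Collecting events from namespace "services-8209"'.
    	events, err := clientset.CoreV1().Events("services-8209").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("Found %d events.\n", len(events.Items))
    	for _, e := range events.Items {
    		// Approximates the "At <time> - event for <object>: {<source>} <reason>: <message>" lines above.
    		fmt.Printf("At %s - event for %s: {%s} %s: %s\n",
    			e.FirstTimestamp.Time, e.InvolvedObject.Name, e.Source.Component, e.Reason, e.Message)
    	}
    }
------------------------------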
Oct 30 00:56:13.355: INFO: Latency metrics for node master1 Oct 30 00:56:13.355: INFO: Logging node info for node master2 Oct 30 00:56:13.358: INFO: Node Info: &Node{ObjectMeta:{master2 208792d3-d365-4ddb-83d4-10e6e818079c 64772 0 2021-10-29 21:06:06 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-29 21:18:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:19 +0000 
UTC,LastTransitionTime:2021-10-29 21:11:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:03 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:03 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:03 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 00:56:03 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:12290c1916d84ddda20431c28083da6a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:314e82b8-9747-4131-b883-220496309995,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 00:56:13.359: INFO: Logging kubelet events for node master2
Oct 30 00:56:13.361: INFO: Logging pods the kubelet thinks is on node master2
Oct 30 00:56:13.388: INFO: node-exporter-lc9kk started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 00:56:13.389: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 30 00:56:13.389: INFO: Container node-exporter ready: true, restart count 0
Oct 30 00:56:13.389: INFO: kube-apiserver-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 00:56:13.389: INFO: Container kube-apiserver ready: true, restart count 0
Oct 30 00:56:13.389: INFO: kube-controller-manager-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 00:56:13.389: INFO: Container kube-controller-manager ready: true, restart count 3
Oct 30 00:56:13.389: INFO: kube-scheduler-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 00:56:13.389: INFO: Container kube-scheduler ready: true, restart count 2
Oct 30 00:56:13.389: INFO: kube-proxy-5gz4v started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 00:56:13.389: INFO: Container kube-proxy ready: true, restart count 2
Oct 30 00:56:13.389: INFO: kube-flannel-qvqll started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 00:56:13.389: INFO: Init container install-cni ready: true, restart count 2
Oct 30 00:56:13.389: INFO: Container kube-flannel ready: true, restart count 1
Oct 30 00:56:13.389: INFO: kube-multus-ds-amd64-brkpk started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 00:56:13.389: INFO: Container kube-multus ready: true, restart count 1
W1030 00:56:13.404055 36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
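------------------------------
Per the stack trace above, the failing case is execAffinityTestForSessionAffinityTimeout, which exercises a NodePort Service whose sessionAffinity is ClientIP with a short affinity timeout. A hedged Go sketch of that kind of Service spec using the core/v1 API types; the selector, port, and timeout values here are illustrative, not the test's exact spec:

    package main

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	affinityTimeout := int32(10) // illustrative short timeout; the real test picks its own value
    	svc := &v1.Service{
    		ObjectMeta: metav1.ObjectMeta{
    			Name:      "affinity-nodeport-timeout", // service name from the log
    			Namespace: "services-8209",
    		},
    		Spec: v1.ServiceSpec{
    			Type:            v1.ServiceTypeNodePort,
    			Selector:        map[string]string{"name": "affinity-nodeport-timeout"},
    			SessionAffinity: v1.ServiceAffinityClientIP,
    			SessionAffinityConfig: &v1.SessionAffinityConfig{
    				ClientIP: &v1.ClientIPConfig{TimeoutSeconds: &affinityTimeout},
    			},
    			Ports: []v1.ServicePort{{Protocol: v1.ProtocolTCP, Port: 80}},
    		},
    	}
    	fmt.Printf("%s: affinity=%s, timeout=%ds\n",
    		svc.Name, svc.Spec.SessionAffinity, *svc.Spec.SessionAffinityConfig.ClientIP.TimeoutSeconds)
    }

With ClientIP affinity, kube-proxy pins each client to one backend until the timeout lapses; the test checks that requests stick to one endpoint and then rotate after the timeout. This run never got that far, because the NodePort itself refused connections for the full two minutes.
------------------------------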
Oct 30 00:56:13.460: INFO: Latency metrics for node master2 Oct 30 00:56:13.460: INFO: Logging node info for node master3 Oct 30 00:56:13.463: INFO: Node Info: &Node{ObjectMeta:{master3 168f1589-e029-47ae-b194-10215fc22d6a 65409 0 2021-10-29 21:06:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-29 21:16:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-29 21:16:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:36 +0000 UTC,LastTransitionTime:2021-10-29 21:11:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:11 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:11 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:11 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 00:56:11 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de18dcb6cb4c493e9f4d987da2c8b3fd,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:89235c4b-b1f5-4716-bbd7-18b41c0bde74,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 00:56:13.463: INFO: Logging kubelet events for node master3 Oct 30 00:56:13.465: INFO: Logging pods the kubelet thinks is on node master3 Oct 30 00:56:13.481: INFO: kube-multus-ds-amd64-bdwh9 started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:13.481: INFO: Container kube-multus ready: true, restart count 1 Oct 30 00:56:13.481: INFO: coredns-8474476ff8-wrwwv started at 2021-10-29 21:09:00 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:13.481: INFO: Container coredns ready: true, restart count 1 Oct 30 00:56:13.481: INFO: prometheus-operator-585ccfb458-czbr2 started at 2021-10-29 21:21:06 +0000 UTC (0+2 container statuses recorded) Oct 30 00:56:13.481: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 00:56:13.481: INFO: Container prometheus-operator ready: true, restart count 0 Oct 30 00:56:13.481: INFO: node-exporter-bv946 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 00:56:13.481: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 00:56:13.481: INFO: Container node-exporter ready: true, restart count 0 Oct 30 00:56:13.481: INFO: kube-controller-manager-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:13.481: INFO: Container kube-controller-manager ready: true, restart count 1 Oct 30 00:56:13.481: INFO: kube-proxy-r6fpx started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:13.481: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 00:56:13.481: INFO: kube-flannel-rbdlt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 00:56:13.481: INFO: Init container install-cni ready: true, restart count 2 Oct 30 
00:56:13.481: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 00:56:13.481: INFO: node-feature-discovery-controller-cff799f9f-qq7g4 started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:13.481: INFO: Container nfd-controller ready: true, restart count 0 Oct 30 00:56:13.481: INFO: kube-apiserver-master3 started at 2021-10-29 21:11:10 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:13.481: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 00:56:13.481: INFO: kube-scheduler-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:13.481: INFO: Container kube-scheduler ready: true, restart count 2 Oct 30 00:56:13.481: INFO: dns-autoscaler-7df78bfcfb-phsdx started at 2021-10-29 21:09:02 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:13.481: INFO: Container autoscaler ready: true, restart count 1 W1030 00:56:13.499690 36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 00:56:13.583: INFO: Latency metrics for node master3 Oct 30 00:56:13.583: INFO: Logging node info for node node1 Oct 30 00:56:13.586: INFO: Node Info: &Node{ObjectMeta:{node1 ddef9269-94c5-4165-81fb-a3b0c4ac5c75 65388 0 2021-10-29 21:07:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 
feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:38 +0000 UTC,LastTransitionTime:2021-10-29 21:11:38 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:11 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:11 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:11 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 00:56:11 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bf4179125e4495c89c046ed0ae7baf7,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ce868148-dc5e-4c7c-a555-42ee929547f7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432289,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 00:56:13.587: INFO: Logging kubelet events for node node1 Oct 30 00:56:13.589: INFO: Logging pods the kubelet thinks is on node node1 Oct 30 00:56:13.696: INFO: kube-flannel-phg88 started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 00:56:13.696: INFO: Init container install-cni ready: true, restart count 2 Oct 30 00:56:13.696: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 00:56:13.696: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:13.696: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 00:56:13.696: INFO: pod-projected-configmaps-d1e73ed0-76ac-472a-b9ca-90556deadb5f started at 2021-10-30 00:56:09 +0000 UTC 
(0+1 container statuses recorded) Oct 30 00:56:13.696: INFO: Container agnhost-container ready: false, restart count 0 Oct 30 00:56:13.696: INFO: kube-proxy-z5hqt started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:13.696: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 00:56:13.696: INFO: collectd-d45rv started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 00:56:13.696: INFO: Container collectd ready: true, restart count 0 Oct 30 00:56:13.696: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 00:56:13.696: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 00:56:13.696: INFO: downwardapi-volume-778db969-1298-4325-8c64-922493e8b46d started at 2021-10-30 00:56:10 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:13.696: INFO: Container client-container ready: false, restart count 0 Oct 30 00:56:13.696: INFO: prometheus-k8s-0 started at 2021-10-29 21:21:17 +0000 UTC (0+4 container statuses recorded) Oct 30 00:56:13.696: INFO: Container config-reloader ready: true, restart count 0 Oct 30 00:56:13.696: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 00:56:13.696: INFO: Container grafana ready: true, restart count 0 Oct 30 00:56:13.696: INFO: Container prometheus ready: true, restart count 1 Oct 30 00:56:13.696: INFO: var-expansion-d3417a8e-6a6c-4fc4-ac31-d9105bbf4375 started at 2021-10-30 00:53:46 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:13.696: INFO: Container dapi-container ready: true, restart count 0 Oct 30 00:56:13.696: INFO: nginx-proxy-node1 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:13.696: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 00:56:13.696: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:13.696: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 00:56:13.696: INFO: cmk-init-discover-node1-n4mcc started at 2021-10-29 21:19:28 +0000 UTC (0+3 container statuses recorded) Oct 30 00:56:13.696: INFO: Container discover ready: false, restart count 0 Oct 30 00:56:13.696: INFO: Container init ready: false, restart count 0 Oct 30 00:56:13.696: INFO: Container install ready: false, restart count 0 Oct 30 00:56:13.696: INFO: cmk-89lqq started at 2021-10-29 21:20:10 +0000 UTC (0+2 container statuses recorded) Oct 30 00:56:13.696: INFO: Container nodereport ready: true, restart count 0 Oct 30 00:56:13.696: INFO: Container reconcile ready: true, restart count 0 Oct 30 00:56:13.696: INFO: node-exporter-256wm started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 00:56:13.696: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 00:56:13.696: INFO: Container node-exporter ready: true, restart count 0 Oct 30 00:56:13.696: INFO: affinity-nodeport-h9dhq started at 2021-10-30 00:53:58 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:13.696: INFO: Container affinity-nodeport ready: true, restart count 0 Oct 30 00:56:13.696: INFO: affinity-nodeport-lpc5m started at 2021-10-30 00:53:58 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:13.696: INFO: Container affinity-nodeport ready: true, restart count 0 Oct 30 00:56:13.696: INFO: pod-logs-websocket-98f278b6-8ea1-45e2-90f8-ab77e45b5f71 started at 2021-10-30 00:55:29 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:13.696: INFO: Container main ready: 
false, restart count 0 Oct 30 00:56:13.696: INFO: kube-multus-ds-amd64-68wrz started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:13.696: INFO: Container kube-multus ready: true, restart count 1 Oct 30 00:56:13.696: INFO: node-feature-discovery-worker-w5vdb started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:13.696: INFO: Container nfd-worker ready: true, restart count 0 W1030 00:56:13.711471 36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 00:56:14.084: INFO: Latency metrics for node node1 Oct 30 00:56:14.084: INFO: Logging node info for node node2 Oct 30 00:56:14.087: INFO: Node Info: &Node{ObjectMeta:{node2 3b49ad19-ba56-4f4a-b1fa-eef102063de9 65226 0 2021-10-29 21:07:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:34 +0000 UTC,LastTransitionTime:2021-10-29 21:11:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:09 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:09 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:09 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 00:56:09 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7283436dd9e34722a6e4df817add95ed,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c219e7bd-582b-4d6c-b379-1161acc70676,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 00:56:14.088: INFO: Logging kubelet events for node node2 Oct 30 00:56:14.090: INFO: Logging pods the kubelet thinks is on node node2 Oct 30 00:56:14.152: INFO: test-pod started at 2021-10-30 00:54:25 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:14.152: INFO: Container webserver ready: true, restart count 0 Oct 30 00:56:14.152: INFO: node-feature-discovery-worker-h6lcp started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:14.152: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 00:56:14.152: INFO: ss2-2 started at 2021-10-30 00:55:53 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:14.152: INFO: Container webserver ready: true, restart count 0 Oct 30 00:56:14.152: INFO: test-rollover-deployment-98c5f4599-2jd6s started at 2021-10-30 00:55:56 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:14.152: INFO: Container agnhost ready: true, restart count 0 Oct 30 00:56:14.152: INFO: svc-latency-rc-hn7zn started at 2021-10-30 
00:56:03 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:14.152: INFO: Container svc-latency-rc ready: true, restart count 0 Oct 30 00:56:14.152: INFO: ss2-1 started at 2021-10-30 00:56:13 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:14.152: INFO: Container webserver ready: false, restart count 0 Oct 30 00:56:14.152: INFO: kube-flannel-f6s5v started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 00:56:14.152: INFO: Init container install-cni ready: true, restart count 2 Oct 30 00:56:14.152: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 00:56:14.152: INFO: kube-multus-ds-amd64-7tvbl started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:14.152: INFO: Container kube-multus ready: true, restart count 1 Oct 30 00:56:14.152: INFO: cmk-webhook-6c9d5f8578-ffk66 started at 2021-10-29 21:20:11 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:14.152: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 00:56:14.152: INFO: ss2-0 started at 2021-10-30 00:55:12 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:14.152: INFO: Container webserver ready: true, restart count 0 Oct 30 00:56:14.152: INFO: nginx-proxy-node2 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:14.152: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 00:56:14.152: INFO: kubernetes-dashboard-785dcbb76d-pbjjt started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:14.152: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 00:56:14.152: INFO: cmk-init-discover-node2-2fmmt started at 2021-10-29 21:19:48 +0000 UTC (0+3 container statuses recorded) Oct 30 00:56:14.152: INFO: Container discover ready: false, restart count 0 Oct 30 00:56:14.152: INFO: Container init ready: false, restart count 0 Oct 30 00:56:14.152: INFO: Container install ready: false, restart count 0 Oct 30 00:56:14.152: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:14.152: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 00:56:14.152: INFO: cmk-8bpbf started at 2021-10-29 21:20:11 +0000 UTC (0+2 container statuses recorded) Oct 30 00:56:14.152: INFO: Container nodereport ready: true, restart count 0 Oct 30 00:56:14.152: INFO: Container reconcile ready: true, restart count 0 Oct 30 00:56:14.152: INFO: node-exporter-r77s4 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 00:56:14.152: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 00:56:14.152: INFO: Container node-exporter ready: true, restart count 0 Oct 30 00:56:14.152: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh started at 2021-10-29 21:24:23 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:14.152: INFO: Container tas-extender ready: true, restart count 0 Oct 30 00:56:14.152: INFO: collectd-flvhl started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 00:56:14.152: INFO: Container collectd ready: true, restart count 0 Oct 30 00:56:14.152: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 00:56:14.152: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 00:56:14.152: INFO: kube-proxy-76285 started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:14.152: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 
00:56:14.152: INFO: affinity-nodeport-jkk2m started at 2021-10-30 00:53:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 00:56:14.152: INFO: Container affinity-nodeport ready: true, restart count 0
Oct 30 00:56:14.152: INFO: execpod-affinitygk2qr started at 2021-10-30 00:54:07 +0000 UTC (0+1 container statuses recorded)
Oct 30 00:56:14.152: INFO: Container agnhost-container ready: true, restart count 0
Oct 30 00:56:14.152: INFO: liveness-00176d17-9bb9-4e06-870f-696107c114d2 started at 2021-10-30 00:55:00 +0000 UTC (0+1 container statuses recorded)
Oct 30 00:56:14.152: INFO: Container agnhost-container ready: true, restart count 3
W1030 00:56:14.179960 36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 00:56:14.852: INFO: Latency metrics for node node2
Oct 30 00:56:14.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8209" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [167.090 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

Oct 30 00:55:55.537: Unexpected error:
    <*errors.errorString | 0xc00468aa30>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30776 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30776 over TCP protocol
occurred

/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":101,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
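For orientation on what the failed spec above exercises: it creates a NodePort Service with ClientIP session affinity plus an explicit affinity timeout, then checks that repeated requests from one client stick to a single backend until that timeout elapses. A minimal client-go sketch of such a Service follows; the selector, port, and 10-second timeout are illustrative assumptions, not values recovered from this run.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Same kubeconfig path the suite logs above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"app": "affinity-nodeport"}, // assumed pod label
			Ports:    []corev1.ServicePort{{Port: 80}},              // the NodePort itself is auto-allocated
			// Pin each client IP to one backend pod...
			SessionAffinity: corev1.ServiceAffinityClientIP,
			// ...and let that pinning expire after 10 idle seconds; the spec
			// probes for exactly this expiry behaviour.
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: int32Ptr(10)},
			},
		},
	}
	if _, err := cs.CoreV1().Services("services-8209").Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Note that the failure above occurs at the reachability precondition rather than in the affinity check itself: the probe never got an answer from node1's InternalIP 10.10.190.207 on the allocated NodePort 30776 within the 2m0s window, so the timeout behaviour was never observed. kube-proxy is what programs both the NodePort and the affinity rule on each node.
------------------------------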
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:55:44.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105
STEP: Creating service test in namespace statefulset-7068
[It] should have a working scale subresource [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating statefulset ss in namespace statefulset-7068
Oct 30 00:55:44.919: INFO: Found 0 stateful pods, waiting for 1
Oct 30 00:55:54.922: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
STEP: Patch a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
Oct 30 00:55:54.940: INFO: Deleting all statefulset in ns statefulset-7068
Oct 30 00:55:54.942: INFO: Scaling statefulset ss to 0
Oct 30 00:56:14.958: INFO: Waiting for statefulset status.replicas updated to 0
Oct 30 00:56:14.965: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:56:14.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7068" for this suite.

• [SLOW TEST:30.100 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
should have a working scale subresource [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":14,"skipped":162,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
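The "getting" / "updating" / "Patch a scale subresource" steps in the passing spec above talk to the /scale subresource of the StatefulSet rather than to the object itself. A sketch of the equivalent typed client-go calls, reusing the namespace and name from the log; the target of 2 replicas is an assumption, since the log does not record the count.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	sts := cs.AppsV1().StatefulSets("statefulset-7068")

	// "getting scale subresource": reads the autoscaling/v1 Scale view
	// of the StatefulSet, not the full object.
	scale, err := sts.GetScale(context.TODO(), "ss", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("current replicas:", scale.Spec.Replicas)

	// "updating a scale subresource": write a new count back through /scale;
	// the controller then reconciles the StatefulSet to match.
	scale.Spec.Replicas = 2 // assumed target
	if _, err := sts.UpdateScale(context.TODO(), "ss", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}

The patch variant exercises the same subresource endpoint with a patch instead of a full update; in both cases the spec then re-reads the StatefulSet and asserts that Spec.Replicas reflects the new count.
------------------------------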
Elapsed: 6.024749442s STEP: Saw pod success Oct 30 00:56:15.244: INFO: Pod "pod-projected-configmaps-d1e73ed0-76ac-472a-b9ca-90556deadb5f" satisfied condition "Succeeded or Failed" Oct 30 00:56:15.246: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-d1e73ed0-76ac-472a-b9ca-90556deadb5f container agnhost-container: STEP: delete the pod Oct 30 00:56:15.263: INFO: Waiting for pod pod-projected-configmaps-d1e73ed0-76ac-472a-b9ca-90556deadb5f to disappear Oct 30 00:56:15.264: INFO: Pod pod-projected-configmaps-d1e73ed0-76ac-472a-b9ca-90556deadb5f no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:15.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1095" for this suite. • [SLOW TEST:6.088 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":321,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:03.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:56:03.071: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-2660 I1030 00:56:03.091230 26 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2660, replica count: 1 I1030 00:56:04.142629 26 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 00:56:05.143652 26 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 00:56:06.146841 26 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 00:56:07.147912 26 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 30 00:56:07.256: INFO: Created: latency-svc-xfssb Oct 30 00:56:07.262: INFO: Got endpoints: latency-svc-xfssb [13.902409ms] Oct 30 00:56:07.267: INFO: Created: latency-svc-78fnb Oct 30 00:56:07.270: INFO: Got endpoints: latency-svc-78fnb [7.931457ms] Oct 30 00:56:07.270: INFO: Created: latency-svc-pq58t Oct 30 00:56:07.272: INFO: Got endpoints: latency-svc-pq58t [9.969489ms] Oct 30 00:56:07.273: INFO: Created: latency-svc-mc54q Oct 30 00:56:07.276: INFO: Got 
endpoints: latency-svc-mc54q [13.431034ms] Oct 30 00:56:07.276: INFO: Created: latency-svc-28gw7 Oct 30 00:56:07.278: INFO: Got endpoints: latency-svc-28gw7 [15.832531ms] Oct 30 00:56:07.279: INFO: Created: latency-svc-gl7hz Oct 30 00:56:07.281: INFO: Got endpoints: latency-svc-gl7hz [18.727509ms] Oct 30 00:56:07.282: INFO: Created: latency-svc-fk2ck Oct 30 00:56:07.284: INFO: Got endpoints: latency-svc-fk2ck [22.080284ms] Oct 30 00:56:07.284: INFO: Created: latency-svc-f2p99 Oct 30 00:56:07.287: INFO: Got endpoints: latency-svc-f2p99 [24.495736ms] Oct 30 00:56:07.287: INFO: Created: latency-svc-mk5lv Oct 30 00:56:07.289: INFO: Got endpoints: latency-svc-mk5lv [26.377532ms] Oct 30 00:56:07.289: INFO: Created: latency-svc-wzbfh Oct 30 00:56:07.292: INFO: Got endpoints: latency-svc-wzbfh [29.658037ms] Oct 30 00:56:07.292: INFO: Created: latency-svc-cxl95 Oct 30 00:56:07.294: INFO: Got endpoints: latency-svc-cxl95 [32.106214ms] Oct 30 00:56:07.295: INFO: Created: latency-svc-slr6k Oct 30 00:56:07.297: INFO: Got endpoints: latency-svc-slr6k [34.319212ms] Oct 30 00:56:07.298: INFO: Created: latency-svc-qdm7p Oct 30 00:56:07.300: INFO: Created: latency-svc-jpg62 Oct 30 00:56:07.300: INFO: Got endpoints: latency-svc-qdm7p [37.489396ms] Oct 30 00:56:07.302: INFO: Got endpoints: latency-svc-jpg62 [39.670988ms] Oct 30 00:56:07.303: INFO: Created: latency-svc-26xfh Oct 30 00:56:07.305: INFO: Got endpoints: latency-svc-26xfh [42.723124ms] Oct 30 00:56:07.306: INFO: Created: latency-svc-wmp2r Oct 30 00:56:07.309: INFO: Created: latency-svc-lsw79 Oct 30 00:56:07.309: INFO: Got endpoints: latency-svc-wmp2r [46.516233ms] Oct 30 00:56:07.311: INFO: Got endpoints: latency-svc-lsw79 [41.568509ms] Oct 30 00:56:07.312: INFO: Created: latency-svc-4rvz5 Oct 30 00:56:07.314: INFO: Got endpoints: latency-svc-4rvz5 [41.918419ms] Oct 30 00:56:07.315: INFO: Created: latency-svc-gkzlh Oct 30 00:56:07.317: INFO: Got endpoints: latency-svc-gkzlh [41.044345ms] Oct 30 00:56:07.317: INFO: Created: latency-svc-t8gns Oct 30 00:56:07.319: INFO: Got endpoints: latency-svc-t8gns [41.066059ms] Oct 30 00:56:07.320: INFO: Created: latency-svc-n8s6d Oct 30 00:56:07.322: INFO: Got endpoints: latency-svc-n8s6d [41.140129ms] Oct 30 00:56:07.323: INFO: Created: latency-svc-fdhrm Oct 30 00:56:07.325: INFO: Got endpoints: latency-svc-fdhrm [41.173687ms] Oct 30 00:56:07.325: INFO: Created: latency-svc-zgm88 Oct 30 00:56:07.328: INFO: Got endpoints: latency-svc-zgm88 [8.131687ms] Oct 30 00:56:07.329: INFO: Created: latency-svc-2xqgc Oct 30 00:56:07.331: INFO: Got endpoints: latency-svc-2xqgc [43.965091ms] Oct 30 00:56:07.332: INFO: Created: latency-svc-pf44l Oct 30 00:56:07.334: INFO: Got endpoints: latency-svc-pf44l [44.843342ms] Oct 30 00:56:07.335: INFO: Created: latency-svc-w7hr5 Oct 30 00:56:07.337: INFO: Got endpoints: latency-svc-w7hr5 [44.825243ms] Oct 30 00:56:07.338: INFO: Created: latency-svc-469b6 Oct 30 00:56:07.339: INFO: Got endpoints: latency-svc-469b6 [45.029134ms] Oct 30 00:56:07.341: INFO: Created: latency-svc-dhvl8 Oct 30 00:56:07.343: INFO: Created: latency-svc-jqrsd Oct 30 00:56:07.343: INFO: Got endpoints: latency-svc-dhvl8 [46.58675ms] Oct 30 00:56:07.345: INFO: Got endpoints: latency-svc-jqrsd [44.858424ms] Oct 30 00:56:07.346: INFO: Created: latency-svc-2tcm4 Oct 30 00:56:07.348: INFO: Got endpoints: latency-svc-2tcm4 [45.652865ms] Oct 30 00:56:07.349: INFO: Created: latency-svc-pn52t Oct 30 00:56:07.351: INFO: Got endpoints: latency-svc-pn52t [46.47876ms] Oct 30 00:56:07.351: INFO: Created: latency-svc-tfdlm Oct 
30 00:56:07.354: INFO: Created: latency-svc-gnrkv Oct 30 00:56:07.357: INFO: Created: latency-svc-gkqqn Oct 30 00:56:07.359: INFO: Got endpoints: latency-svc-tfdlm [50.327517ms] Oct 30 00:56:07.359: INFO: Created: latency-svc-gbkdm Oct 30 00:56:07.362: INFO: Created: latency-svc-sdwqg Oct 30 00:56:07.364: INFO: Created: latency-svc-nwc4g Oct 30 00:56:07.366: INFO: Created: latency-svc-vtd6v Oct 30 00:56:07.370: INFO: Created: latency-svc-h546m Oct 30 00:56:07.372: INFO: Created: latency-svc-wptsb Oct 30 00:56:07.374: INFO: Created: latency-svc-fz7vk Oct 30 00:56:07.377: INFO: Created: latency-svc-g99mc Oct 30 00:56:07.380: INFO: Created: latency-svc-h2stm Oct 30 00:56:07.382: INFO: Created: latency-svc-gx6jb Oct 30 00:56:07.385: INFO: Created: latency-svc-fvsjn Oct 30 00:56:07.388: INFO: Created: latency-svc-4sj5t Oct 30 00:56:07.390: INFO: Created: latency-svc-bhllz Oct 30 00:56:07.410: INFO: Got endpoints: latency-svc-gnrkv [98.211823ms] Oct 30 00:56:07.415: INFO: Created: latency-svc-sh2cg Oct 30 00:56:07.460: INFO: Got endpoints: latency-svc-gkqqn [146.108624ms] Oct 30 00:56:07.465: INFO: Created: latency-svc-wfm2l Oct 30 00:56:07.510: INFO: Got endpoints: latency-svc-gbkdm [193.52781ms] Oct 30 00:56:07.515: INFO: Created: latency-svc-46zsg Oct 30 00:56:07.560: INFO: Got endpoints: latency-svc-sdwqg [237.806607ms] Oct 30 00:56:07.566: INFO: Created: latency-svc-zmn7w Oct 30 00:56:07.609: INFO: Got endpoints: latency-svc-nwc4g [284.026143ms] Oct 30 00:56:07.615: INFO: Created: latency-svc-dnwnm Oct 30 00:56:07.660: INFO: Got endpoints: latency-svc-vtd6v [332.375416ms] Oct 30 00:56:07.665: INFO: Created: latency-svc-v7ztt Oct 30 00:56:07.710: INFO: Got endpoints: latency-svc-h546m [379.074749ms] Oct 30 00:56:07.715: INFO: Created: latency-svc-8n9kh Oct 30 00:56:07.759: INFO: Got endpoints: latency-svc-wptsb [425.678455ms] Oct 30 00:56:07.765: INFO: Created: latency-svc-5d8lc Oct 30 00:56:07.810: INFO: Got endpoints: latency-svc-fz7vk [473.459936ms] Oct 30 00:56:07.815: INFO: Created: latency-svc-l4ppj Oct 30 00:56:07.860: INFO: Got endpoints: latency-svc-g99mc [520.433474ms] Oct 30 00:56:07.865: INFO: Created: latency-svc-hlqs6 Oct 30 00:56:07.910: INFO: Got endpoints: latency-svc-h2stm [566.124425ms] Oct 30 00:56:07.915: INFO: Created: latency-svc-24tl2 Oct 30 00:56:07.960: INFO: Got endpoints: latency-svc-gx6jb [614.732894ms] Oct 30 00:56:07.965: INFO: Created: latency-svc-dj8xt Oct 30 00:56:08.010: INFO: Got endpoints: latency-svc-fvsjn [661.744537ms] Oct 30 00:56:08.016: INFO: Created: latency-svc-9wcbd Oct 30 00:56:08.059: INFO: Got endpoints: latency-svc-4sj5t [707.796167ms] Oct 30 00:56:08.065: INFO: Created: latency-svc-q4q6t Oct 30 00:56:08.110: INFO: Got endpoints: latency-svc-bhllz [750.177916ms] Oct 30 00:56:08.115: INFO: Created: latency-svc-7bz8f Oct 30 00:56:08.159: INFO: Got endpoints: latency-svc-sh2cg [749.424419ms] Oct 30 00:56:08.164: INFO: Created: latency-svc-8dvt2 Oct 30 00:56:08.210: INFO: Got endpoints: latency-svc-wfm2l [749.254091ms] Oct 30 00:56:08.215: INFO: Created: latency-svc-28n7w Oct 30 00:56:08.259: INFO: Got endpoints: latency-svc-46zsg [749.176401ms] Oct 30 00:56:08.265: INFO: Created: latency-svc-frcr4 Oct 30 00:56:08.309: INFO: Got endpoints: latency-svc-zmn7w [749.50967ms] Oct 30 00:56:08.315: INFO: Created: latency-svc-4wk45 Oct 30 00:56:08.359: INFO: Got endpoints: latency-svc-dnwnm [749.35879ms] Oct 30 00:56:08.364: INFO: Created: latency-svc-hdpps Oct 30 00:56:08.409: INFO: Got endpoints: latency-svc-v7ztt [749.20395ms] Oct 30 00:56:08.415: 
INFO: Created: latency-svc-cttgq Oct 30 00:56:08.460: INFO: Got endpoints: latency-svc-8n9kh [749.842154ms] Oct 30 00:56:08.465: INFO: Created: latency-svc-gsdmk Oct 30 00:56:08.510: INFO: Got endpoints: latency-svc-5d8lc [750.644314ms] Oct 30 00:56:08.516: INFO: Created: latency-svc-zj8kn Oct 30 00:56:08.560: INFO: Got endpoints: latency-svc-l4ppj [749.375543ms] Oct 30 00:56:08.566: INFO: Created: latency-svc-ffq62 Oct 30 00:56:08.610: INFO: Got endpoints: latency-svc-hlqs6 [750.453111ms] Oct 30 00:56:08.616: INFO: Created: latency-svc-t7fbr Oct 30 00:56:08.660: INFO: Got endpoints: latency-svc-24tl2 [750.622698ms] Oct 30 00:56:08.666: INFO: Created: latency-svc-kb8c8 Oct 30 00:56:08.710: INFO: Got endpoints: latency-svc-dj8xt [750.642589ms] Oct 30 00:56:08.716: INFO: Created: latency-svc-hcnm2 Oct 30 00:56:08.760: INFO: Got endpoints: latency-svc-9wcbd [750.444093ms] Oct 30 00:56:08.766: INFO: Created: latency-svc-zg28b Oct 30 00:56:08.811: INFO: Got endpoints: latency-svc-q4q6t [751.130335ms] Oct 30 00:56:08.817: INFO: Created: latency-svc-6p7dp Oct 30 00:56:08.859: INFO: Got endpoints: latency-svc-7bz8f [749.568368ms] Oct 30 00:56:08.864: INFO: Created: latency-svc-xvxpp Oct 30 00:56:08.909: INFO: Got endpoints: latency-svc-8dvt2 [750.188079ms] Oct 30 00:56:08.914: INFO: Created: latency-svc-q4xt2 Oct 30 00:56:08.959: INFO: Got endpoints: latency-svc-28n7w [749.287794ms] Oct 30 00:56:08.967: INFO: Created: latency-svc-njpfl Oct 30 00:56:09.010: INFO: Got endpoints: latency-svc-frcr4 [750.454126ms] Oct 30 00:56:09.016: INFO: Created: latency-svc-8hvrw Oct 30 00:56:09.060: INFO: Got endpoints: latency-svc-4wk45 [750.427775ms] Oct 30 00:56:09.065: INFO: Created: latency-svc-w6sqz Oct 30 00:56:09.110: INFO: Got endpoints: latency-svc-hdpps [750.830204ms] Oct 30 00:56:09.116: INFO: Created: latency-svc-gsmzj Oct 30 00:56:09.160: INFO: Got endpoints: latency-svc-cttgq [750.29505ms] Oct 30 00:56:09.165: INFO: Created: latency-svc-ckw6c Oct 30 00:56:09.209: INFO: Got endpoints: latency-svc-gsdmk [748.644183ms] Oct 30 00:56:09.216: INFO: Created: latency-svc-wshnh Oct 30 00:56:09.261: INFO: Got endpoints: latency-svc-zj8kn [750.513932ms] Oct 30 00:56:09.268: INFO: Created: latency-svc-27tqf Oct 30 00:56:09.310: INFO: Got endpoints: latency-svc-ffq62 [750.272754ms] Oct 30 00:56:09.315: INFO: Created: latency-svc-b6xnf Oct 30 00:56:09.361: INFO: Got endpoints: latency-svc-t7fbr [750.178407ms] Oct 30 00:56:09.367: INFO: Created: latency-svc-kxl68 Oct 30 00:56:09.410: INFO: Got endpoints: latency-svc-kb8c8 [749.407289ms] Oct 30 00:56:09.415: INFO: Created: latency-svc-td6mt Oct 30 00:56:09.459: INFO: Got endpoints: latency-svc-hcnm2 [748.488791ms] Oct 30 00:56:09.465: INFO: Created: latency-svc-pmgz9 Oct 30 00:56:09.509: INFO: Got endpoints: latency-svc-zg28b [748.884462ms] Oct 30 00:56:09.515: INFO: Created: latency-svc-qb9mr Oct 30 00:56:09.560: INFO: Got endpoints: latency-svc-6p7dp [749.412504ms] Oct 30 00:56:09.565: INFO: Created: latency-svc-7b8b4 Oct 30 00:56:09.609: INFO: Got endpoints: latency-svc-xvxpp [750.126708ms] Oct 30 00:56:09.615: INFO: Created: latency-svc-9zgk8 Oct 30 00:56:09.660: INFO: Got endpoints: latency-svc-q4xt2 [750.08153ms] Oct 30 00:56:09.665: INFO: Created: latency-svc-hd7qp Oct 30 00:56:09.709: INFO: Got endpoints: latency-svc-njpfl [750.069146ms] Oct 30 00:56:09.715: INFO: Created: latency-svc-mz6ld Oct 30 00:56:09.760: INFO: Got endpoints: latency-svc-8hvrw [749.840712ms] Oct 30 00:56:09.765: INFO: Created: latency-svc-vttnd Oct 30 00:56:09.809: INFO: Got 
endpoints: latency-svc-w6sqz [748.825754ms] Oct 30 00:56:09.815: INFO: Created: latency-svc-4w6fz Oct 30 00:56:09.860: INFO: Got endpoints: latency-svc-gsmzj [750.285293ms] Oct 30 00:56:09.866: INFO: Created: latency-svc-z97ws Oct 30 00:56:09.910: INFO: Got endpoints: latency-svc-ckw6c [750.401819ms] Oct 30 00:56:09.916: INFO: Created: latency-svc-5p2gj Oct 30 00:56:09.960: INFO: Got endpoints: latency-svc-wshnh [751.226065ms] Oct 30 00:56:09.966: INFO: Created: latency-svc-pl6xc Oct 30 00:56:10.010: INFO: Got endpoints: latency-svc-27tqf [749.016759ms] Oct 30 00:56:10.015: INFO: Created: latency-svc-4gsmx Oct 30 00:56:10.059: INFO: Got endpoints: latency-svc-b6xnf [749.33739ms] Oct 30 00:56:10.066: INFO: Created: latency-svc-fb55t Oct 30 00:56:10.109: INFO: Got endpoints: latency-svc-kxl68 [748.591193ms] Oct 30 00:56:10.115: INFO: Created: latency-svc-c46vb Oct 30 00:56:10.159: INFO: Got endpoints: latency-svc-td6mt [749.772283ms] Oct 30 00:56:10.165: INFO: Created: latency-svc-gkdhg Oct 30 00:56:10.210: INFO: Got endpoints: latency-svc-pmgz9 [750.921905ms] Oct 30 00:56:10.215: INFO: Created: latency-svc-6664w Oct 30 00:56:10.260: INFO: Got endpoints: latency-svc-qb9mr [751.128598ms] Oct 30 00:56:10.266: INFO: Created: latency-svc-c9bs4 Oct 30 00:56:10.310: INFO: Got endpoints: latency-svc-7b8b4 [749.605895ms] Oct 30 00:56:10.315: INFO: Created: latency-svc-h9ghs Oct 30 00:56:10.359: INFO: Got endpoints: latency-svc-9zgk8 [749.794205ms] Oct 30 00:56:10.367: INFO: Created: latency-svc-785zk Oct 30 00:56:10.410: INFO: Got endpoints: latency-svc-hd7qp [750.250214ms] Oct 30 00:56:10.415: INFO: Created: latency-svc-f4rxr Oct 30 00:56:10.460: INFO: Got endpoints: latency-svc-mz6ld [751.077635ms] Oct 30 00:56:10.466: INFO: Created: latency-svc-tnk9w Oct 30 00:56:10.510: INFO: Got endpoints: latency-svc-vttnd [750.511751ms] Oct 30 00:56:10.516: INFO: Created: latency-svc-bww2j Oct 30 00:56:10.559: INFO: Got endpoints: latency-svc-4w6fz [750.424309ms] Oct 30 00:56:10.566: INFO: Created: latency-svc-m25qp Oct 30 00:56:10.610: INFO: Got endpoints: latency-svc-z97ws [750.202567ms] Oct 30 00:56:10.615: INFO: Created: latency-svc-n2nrc Oct 30 00:56:10.661: INFO: Got endpoints: latency-svc-5p2gj [750.657134ms] Oct 30 00:56:10.667: INFO: Created: latency-svc-p79q5 Oct 30 00:56:10.710: INFO: Got endpoints: latency-svc-pl6xc [750.595247ms] Oct 30 00:56:10.716: INFO: Created: latency-svc-jfvsv Oct 30 00:56:10.759: INFO: Got endpoints: latency-svc-4gsmx [749.010404ms] Oct 30 00:56:10.764: INFO: Created: latency-svc-j77v2 Oct 30 00:56:10.810: INFO: Got endpoints: latency-svc-fb55t [750.668634ms] Oct 30 00:56:10.816: INFO: Created: latency-svc-tb46h Oct 30 00:56:10.860: INFO: Got endpoints: latency-svc-c46vb [750.189861ms] Oct 30 00:56:10.867: INFO: Created: latency-svc-4mjfv Oct 30 00:56:10.913: INFO: Got endpoints: latency-svc-gkdhg [753.44036ms] Oct 30 00:56:10.926: INFO: Created: latency-svc-9vkmk Oct 30 00:56:10.959: INFO: Got endpoints: latency-svc-6664w [749.643048ms] Oct 30 00:56:10.965: INFO: Created: latency-svc-qtcql Oct 30 00:56:11.010: INFO: Got endpoints: latency-svc-c9bs4 [749.326317ms] Oct 30 00:56:11.015: INFO: Created: latency-svc-tzcmx Oct 30 00:56:11.059: INFO: Got endpoints: latency-svc-h9ghs [749.615172ms] Oct 30 00:56:11.066: INFO: Created: latency-svc-tgs6m Oct 30 00:56:11.109: INFO: Got endpoints: latency-svc-785zk [750.218059ms] Oct 30 00:56:11.116: INFO: Created: latency-svc-8xqj7 Oct 30 00:56:11.160: INFO: Got endpoints: latency-svc-f4rxr [749.640363ms] Oct 30 00:56:11.165: INFO: 
Created: latency-svc-rptwv Oct 30 00:56:11.209: INFO: Got endpoints: latency-svc-tnk9w [748.921747ms] Oct 30 00:56:11.215: INFO: Created: latency-svc-mrtb5 Oct 30 00:56:11.261: INFO: Got endpoints: latency-svc-bww2j [750.96635ms] Oct 30 00:56:11.267: INFO: Created: latency-svc-6gt8t Oct 30 00:56:11.310: INFO: Got endpoints: latency-svc-m25qp [750.858119ms] Oct 30 00:56:11.317: INFO: Created: latency-svc-fqcq5 Oct 30 00:56:11.361: INFO: Got endpoints: latency-svc-n2nrc [750.535365ms] Oct 30 00:56:11.368: INFO: Created: latency-svc-42sj2 Oct 30 00:56:11.410: INFO: Got endpoints: latency-svc-p79q5 [748.962164ms] Oct 30 00:56:11.415: INFO: Created: latency-svc-xr6z7 Oct 30 00:56:11.460: INFO: Got endpoints: latency-svc-jfvsv [749.281668ms] Oct 30 00:56:11.465: INFO: Created: latency-svc-8gn5v Oct 30 00:56:11.510: INFO: Got endpoints: latency-svc-j77v2 [750.956687ms] Oct 30 00:56:11.515: INFO: Created: latency-svc-n2m8q Oct 30 00:56:11.559: INFO: Got endpoints: latency-svc-tb46h [748.498801ms] Oct 30 00:56:11.564: INFO: Created: latency-svc-b7fh5 Oct 30 00:56:11.609: INFO: Got endpoints: latency-svc-4mjfv [749.436695ms] Oct 30 00:56:11.615: INFO: Created: latency-svc-qkddm Oct 30 00:56:11.660: INFO: Got endpoints: latency-svc-9vkmk [747.004121ms] Oct 30 00:56:11.665: INFO: Created: latency-svc-g8h4f Oct 30 00:56:11.710: INFO: Got endpoints: latency-svc-qtcql [750.595293ms] Oct 30 00:56:11.716: INFO: Created: latency-svc-fjhhc Oct 30 00:56:11.759: INFO: Got endpoints: latency-svc-tzcmx [749.646685ms] Oct 30 00:56:11.765: INFO: Created: latency-svc-ksvtd Oct 30 00:56:11.811: INFO: Got endpoints: latency-svc-tgs6m [751.276879ms] Oct 30 00:56:11.816: INFO: Created: latency-svc-9fspj Oct 30 00:56:11.860: INFO: Got endpoints: latency-svc-8xqj7 [750.343121ms] Oct 30 00:56:11.865: INFO: Created: latency-svc-clsqz Oct 30 00:56:11.909: INFO: Got endpoints: latency-svc-rptwv [749.062505ms] Oct 30 00:56:11.914: INFO: Created: latency-svc-cdqkj Oct 30 00:56:11.960: INFO: Got endpoints: latency-svc-mrtb5 [750.449139ms] Oct 30 00:56:11.965: INFO: Created: latency-svc-wv7cm Oct 30 00:56:12.010: INFO: Got endpoints: latency-svc-6gt8t [748.453323ms] Oct 30 00:56:12.015: INFO: Created: latency-svc-gcbhx Oct 30 00:56:12.060: INFO: Got endpoints: latency-svc-fqcq5 [749.51571ms] Oct 30 00:56:12.066: INFO: Created: latency-svc-pdtbv Oct 30 00:56:12.110: INFO: Got endpoints: latency-svc-42sj2 [749.536703ms] Oct 30 00:56:12.117: INFO: Created: latency-svc-62kkt Oct 30 00:56:12.160: INFO: Got endpoints: latency-svc-xr6z7 [750.43492ms] Oct 30 00:56:12.166: INFO: Created: latency-svc-7645k Oct 30 00:56:12.210: INFO: Got endpoints: latency-svc-8gn5v [750.366051ms] Oct 30 00:56:12.216: INFO: Created: latency-svc-6wkhs Oct 30 00:56:12.260: INFO: Got endpoints: latency-svc-n2m8q [750.059543ms] Oct 30 00:56:12.266: INFO: Created: latency-svc-46bzw Oct 30 00:56:12.310: INFO: Got endpoints: latency-svc-b7fh5 [751.620935ms] Oct 30 00:56:12.315: INFO: Created: latency-svc-txk5d Oct 30 00:56:12.361: INFO: Got endpoints: latency-svc-qkddm [751.701166ms] Oct 30 00:56:12.367: INFO: Created: latency-svc-dr8lq Oct 30 00:56:12.410: INFO: Got endpoints: latency-svc-g8h4f [749.787456ms] Oct 30 00:56:12.416: INFO: Created: latency-svc-klqzx Oct 30 00:56:12.460: INFO: Got endpoints: latency-svc-fjhhc [749.394904ms] Oct 30 00:56:12.465: INFO: Created: latency-svc-v5blj Oct 30 00:56:12.510: INFO: Got endpoints: latency-svc-ksvtd [750.513327ms] Oct 30 00:56:12.515: INFO: Created: latency-svc-5z8nw Oct 30 00:56:12.560: INFO: Got endpoints: 
latency-svc-9fspj [748.910911ms] Oct 30 00:56:12.566: INFO: Created: latency-svc-d2fmb Oct 30 00:56:12.610: INFO: Got endpoints: latency-svc-clsqz [750.532301ms] Oct 30 00:56:12.616: INFO: Created: latency-svc-bzt2m Oct 30 00:56:12.659: INFO: Got endpoints: latency-svc-cdqkj [750.370798ms] Oct 30 00:56:12.664: INFO: Created: latency-svc-gzshx Oct 30 00:56:12.710: INFO: Got endpoints: latency-svc-wv7cm [750.39383ms] Oct 30 00:56:12.716: INFO: Created: latency-svc-g4d4k Oct 30 00:56:12.761: INFO: Got endpoints: latency-svc-gcbhx [751.116718ms] Oct 30 00:56:12.767: INFO: Created: latency-svc-hfh4v Oct 30 00:56:12.810: INFO: Got endpoints: latency-svc-pdtbv [750.567993ms] Oct 30 00:56:12.816: INFO: Created: latency-svc-qnqvx Oct 30 00:56:12.860: INFO: Got endpoints: latency-svc-62kkt [749.592ms] Oct 30 00:56:12.866: INFO: Created: latency-svc-2tccq Oct 30 00:56:12.910: INFO: Got endpoints: latency-svc-7645k [749.916497ms] Oct 30 00:56:12.916: INFO: Created: latency-svc-mmhnb Oct 30 00:56:12.960: INFO: Got endpoints: latency-svc-6wkhs [749.53299ms] Oct 30 00:56:12.965: INFO: Created: latency-svc-v27jz Oct 30 00:56:13.011: INFO: Got endpoints: latency-svc-46bzw [751.083942ms] Oct 30 00:56:13.017: INFO: Created: latency-svc-fr2lt Oct 30 00:56:13.060: INFO: Got endpoints: latency-svc-txk5d [749.340695ms] Oct 30 00:56:13.066: INFO: Created: latency-svc-5kpvd Oct 30 00:56:13.110: INFO: Got endpoints: latency-svc-dr8lq [749.063214ms] Oct 30 00:56:13.115: INFO: Created: latency-svc-zwg5j Oct 30 00:56:13.159: INFO: Got endpoints: latency-svc-klqzx [749.304962ms] Oct 30 00:56:13.165: INFO: Created: latency-svc-9fzk7 Oct 30 00:56:13.210: INFO: Got endpoints: latency-svc-v5blj [750.74882ms] Oct 30 00:56:13.217: INFO: Created: latency-svc-kkqkf Oct 30 00:56:13.260: INFO: Got endpoints: latency-svc-5z8nw [750.152327ms] Oct 30 00:56:13.266: INFO: Created: latency-svc-wbslq Oct 30 00:56:13.310: INFO: Got endpoints: latency-svc-d2fmb [750.795617ms] Oct 30 00:56:13.316: INFO: Created: latency-svc-4j86h Oct 30 00:56:13.360: INFO: Got endpoints: latency-svc-bzt2m [749.792112ms] Oct 30 00:56:13.366: INFO: Created: latency-svc-4s8ck Oct 30 00:56:13.410: INFO: Got endpoints: latency-svc-gzshx [750.887637ms] Oct 30 00:56:13.416: INFO: Created: latency-svc-fchwv Oct 30 00:56:13.460: INFO: Got endpoints: latency-svc-g4d4k [749.674ms] Oct 30 00:56:13.465: INFO: Created: latency-svc-5m5lz Oct 30 00:56:13.510: INFO: Got endpoints: latency-svc-hfh4v [748.997418ms] Oct 30 00:56:13.516: INFO: Created: latency-svc-nwm9z Oct 30 00:56:13.559: INFO: Got endpoints: latency-svc-qnqvx [748.858675ms] Oct 30 00:56:13.564: INFO: Created: latency-svc-4q55p Oct 30 00:56:13.610: INFO: Got endpoints: latency-svc-2tccq [750.221611ms] Oct 30 00:56:13.615: INFO: Created: latency-svc-b2hrw Oct 30 00:56:13.659: INFO: Got endpoints: latency-svc-mmhnb [749.213777ms] Oct 30 00:56:13.666: INFO: Created: latency-svc-mnbj5 Oct 30 00:56:13.710: INFO: Got endpoints: latency-svc-v27jz [750.440898ms] Oct 30 00:56:13.716: INFO: Created: latency-svc-blhd4 Oct 30 00:56:13.760: INFO: Got endpoints: latency-svc-fr2lt [748.895178ms] Oct 30 00:56:13.765: INFO: Created: latency-svc-kzn26 Oct 30 00:56:13.809: INFO: Got endpoints: latency-svc-5kpvd [749.261611ms] Oct 30 00:56:13.816: INFO: Created: latency-svc-h5pcx Oct 30 00:56:13.861: INFO: Got endpoints: latency-svc-zwg5j [751.250636ms] Oct 30 00:56:13.866: INFO: Created: latency-svc-qqkcc Oct 30 00:56:13.960: INFO: Got endpoints: latency-svc-9fzk7 [800.856348ms] Oct 30 00:56:13.966: INFO: Created: 
latency-svc-znq2b Oct 30 00:56:14.010: INFO: Got endpoints: latency-svc-kkqkf [799.914935ms] Oct 30 00:56:14.016: INFO: Created: latency-svc-q6gbx Oct 30 00:56:14.061: INFO: Got endpoints: latency-svc-wbslq [800.393014ms] Oct 30 00:56:14.066: INFO: Created: latency-svc-zqfhh Oct 30 00:56:14.110: INFO: Got endpoints: latency-svc-4j86h [799.265748ms] Oct 30 00:56:14.117: INFO: Created: latency-svc-wwkb8 Oct 30 00:56:14.164: INFO: Got endpoints: latency-svc-4s8ck [803.884113ms] Oct 30 00:56:14.179: INFO: Created: latency-svc-2lvq5 Oct 30 00:56:14.209: INFO: Got endpoints: latency-svc-fchwv [798.832616ms] Oct 30 00:56:14.215: INFO: Created: latency-svc-2h5ns Oct 30 00:56:14.259: INFO: Got endpoints: latency-svc-5m5lz [798.90307ms] Oct 30 00:56:14.264: INFO: Created: latency-svc-bwwqw Oct 30 00:56:14.309: INFO: Got endpoints: latency-svc-nwm9z [798.373359ms] Oct 30 00:56:14.314: INFO: Created: latency-svc-bt8n9 Oct 30 00:56:14.360: INFO: Got endpoints: latency-svc-4q55p [800.917869ms] Oct 30 00:56:14.367: INFO: Created: latency-svc-x8kdc Oct 30 00:56:14.410: INFO: Got endpoints: latency-svc-b2hrw [799.701497ms] Oct 30 00:56:14.417: INFO: Created: latency-svc-l8vjh Oct 30 00:56:14.460: INFO: Got endpoints: latency-svc-mnbj5 [800.515253ms] Oct 30 00:56:14.466: INFO: Created: latency-svc-zdn6z Oct 30 00:56:14.510: INFO: Got endpoints: latency-svc-blhd4 [800.009018ms] Oct 30 00:56:14.518: INFO: Created: latency-svc-x4mcl Oct 30 00:56:14.560: INFO: Got endpoints: latency-svc-kzn26 [799.864644ms] Oct 30 00:56:14.566: INFO: Created: latency-svc-zs8x2 Oct 30 00:56:14.611: INFO: Got endpoints: latency-svc-h5pcx [801.806129ms] Oct 30 00:56:14.616: INFO: Created: latency-svc-z6clq Oct 30 00:56:14.661: INFO: Got endpoints: latency-svc-qqkcc [799.245335ms] Oct 30 00:56:14.666: INFO: Created: latency-svc-vmtkl Oct 30 00:56:14.710: INFO: Got endpoints: latency-svc-znq2b [749.96636ms] Oct 30 00:56:14.715: INFO: Created: latency-svc-c5lfr Oct 30 00:56:14.760: INFO: Got endpoints: latency-svc-q6gbx [749.484338ms] Oct 30 00:56:14.765: INFO: Created: latency-svc-tt26m Oct 30 00:56:14.810: INFO: Got endpoints: latency-svc-zqfhh [749.021229ms] Oct 30 00:56:14.816: INFO: Created: latency-svc-mmpxd Oct 30 00:56:14.859: INFO: Got endpoints: latency-svc-wwkb8 [749.278459ms] Oct 30 00:56:14.865: INFO: Created: latency-svc-kvlx4 Oct 30 00:56:14.910: INFO: Got endpoints: latency-svc-2lvq5 [745.774003ms] Oct 30 00:56:14.917: INFO: Created: latency-svc-bdrvd Oct 30 00:56:14.961: INFO: Got endpoints: latency-svc-2h5ns [751.773642ms] Oct 30 00:56:14.977: INFO: Created: latency-svc-whcdz Oct 30 00:56:15.010: INFO: Got endpoints: latency-svc-bwwqw [751.012936ms] Oct 30 00:56:15.015: INFO: Created: latency-svc-zd9v9 Oct 30 00:56:15.060: INFO: Got endpoints: latency-svc-bt8n9 [751.06162ms] Oct 30 00:56:15.067: INFO: Created: latency-svc-8455j Oct 30 00:56:15.110: INFO: Got endpoints: latency-svc-x8kdc [749.923601ms] Oct 30 00:56:15.116: INFO: Created: latency-svc-crb2j Oct 30 00:56:15.160: INFO: Got endpoints: latency-svc-l8vjh [749.870003ms] Oct 30 00:56:15.210: INFO: Got endpoints: latency-svc-zdn6z [749.369321ms] Oct 30 00:56:15.259: INFO: Got endpoints: latency-svc-x4mcl [749.088379ms] Oct 30 00:56:15.309: INFO: Got endpoints: latency-svc-zs8x2 [749.495212ms] Oct 30 00:56:15.359: INFO: Got endpoints: latency-svc-z6clq [748.300737ms] Oct 30 00:56:15.410: INFO: Got endpoints: latency-svc-vmtkl [749.12822ms] Oct 30 00:56:15.460: INFO: Got endpoints: latency-svc-c5lfr [749.834486ms] Oct 30 00:56:15.510: INFO: Got endpoints: 
latency-svc-tt26m [749.764723ms] Oct 30 00:56:15.562: INFO: Got endpoints: latency-svc-mmpxd [752.678059ms] Oct 30 00:56:15.610: INFO: Got endpoints: latency-svc-kvlx4 [750.602904ms] Oct 30 00:56:15.660: INFO: Got endpoints: latency-svc-bdrvd [749.435538ms] Oct 30 00:56:15.710: INFO: Got endpoints: latency-svc-whcdz [748.902641ms] Oct 30 00:56:15.761: INFO: Got endpoints: latency-svc-zd9v9 [750.690011ms] Oct 30 00:56:15.809: INFO: Got endpoints: latency-svc-8455j [749.767339ms] Oct 30 00:56:15.860: INFO: Got endpoints: latency-svc-crb2j [749.260377ms] Oct 30 00:56:15.860: INFO: Latencies: [7.931457ms 8.131687ms 9.969489ms 13.431034ms 15.832531ms 18.727509ms 22.080284ms 24.495736ms 26.377532ms 29.658037ms 32.106214ms 34.319212ms 37.489396ms 39.670988ms 41.044345ms 41.066059ms 41.140129ms 41.173687ms 41.568509ms 41.918419ms 42.723124ms 43.965091ms 44.825243ms 44.843342ms 44.858424ms 45.029134ms 45.652865ms 46.47876ms 46.516233ms 46.58675ms 50.327517ms 98.211823ms 146.108624ms 193.52781ms 237.806607ms 284.026143ms 332.375416ms 379.074749ms 425.678455ms 473.459936ms 520.433474ms 566.124425ms 614.732894ms 661.744537ms 707.796167ms 745.774003ms 747.004121ms 748.300737ms 748.453323ms 748.488791ms 748.498801ms 748.591193ms 748.644183ms 748.825754ms 748.858675ms 748.884462ms 748.895178ms 748.902641ms 748.910911ms 748.921747ms 748.962164ms 748.997418ms 749.010404ms 749.016759ms 749.021229ms 749.062505ms 749.063214ms 749.088379ms 749.12822ms 749.176401ms 749.20395ms 749.213777ms 749.254091ms 749.260377ms 749.261611ms 749.278459ms 749.281668ms 749.287794ms 749.304962ms 749.326317ms 749.33739ms 749.340695ms 749.35879ms 749.369321ms 749.375543ms 749.394904ms 749.407289ms 749.412504ms 749.424419ms 749.435538ms 749.436695ms 749.484338ms 749.495212ms 749.50967ms 749.51571ms 749.53299ms 749.536703ms 749.568368ms 749.592ms 749.605895ms 749.615172ms 749.640363ms 749.643048ms 749.646685ms 749.674ms 749.764723ms 749.767339ms 749.772283ms 749.787456ms 749.792112ms 749.794205ms 749.834486ms 749.840712ms 749.842154ms 749.870003ms 749.916497ms 749.923601ms 749.96636ms 750.059543ms 750.069146ms 750.08153ms 750.126708ms 750.152327ms 750.177916ms 750.178407ms 750.188079ms 750.189861ms 750.202567ms 750.218059ms 750.221611ms 750.250214ms 750.272754ms 750.285293ms 750.29505ms 750.343121ms 750.366051ms 750.370798ms 750.39383ms 750.401819ms 750.424309ms 750.427775ms 750.43492ms 750.440898ms 750.444093ms 750.449139ms 750.453111ms 750.454126ms 750.511751ms 750.513327ms 750.513932ms 750.532301ms 750.535365ms 750.567993ms 750.595247ms 750.595293ms 750.602904ms 750.622698ms 750.642589ms 750.644314ms 750.657134ms 750.668634ms 750.690011ms 750.74882ms 750.795617ms 750.830204ms 750.858119ms 750.887637ms 750.921905ms 750.956687ms 750.96635ms 751.012936ms 751.06162ms 751.077635ms 751.083942ms 751.116718ms 751.128598ms 751.130335ms 751.226065ms 751.250636ms 751.276879ms 751.620935ms 751.701166ms 751.773642ms 752.678059ms 753.44036ms 798.373359ms 798.832616ms 798.90307ms 799.245335ms 799.265748ms 799.701497ms 799.864644ms 799.914935ms 800.009018ms 800.393014ms 800.515253ms 800.856348ms 800.917869ms 801.806129ms 803.884113ms] Oct 30 00:56:15.860: INFO: 50 %ile: 749.615172ms Oct 30 00:56:15.860: INFO: 90 %ile: 751.620935ms Oct 30 00:56:15.860: INFO: 99 %ile: 801.806129ms Oct 30 00:56:15.860: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:15.860: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "svc-latency-2660" for this suite. • [SLOW TEST:12.822 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":13,"skipped":308,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:10.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 00:56:10.894: INFO: Waiting up to 5m0s for pod "downwardapi-volume-778db969-1298-4325-8c64-922493e8b46d" in namespace "downward-api-3015" to be "Succeeded or Failed" Oct 30 00:56:10.896: INFO: Pod "downwardapi-volume-778db969-1298-4325-8c64-922493e8b46d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.241183ms Oct 30 00:56:12.900: INFO: Pod "downwardapi-volume-778db969-1298-4325-8c64-922493e8b46d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006388724s Oct 30 00:56:14.905: INFO: Pod "downwardapi-volume-778db969-1298-4325-8c64-922493e8b46d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010853565s Oct 30 00:56:16.908: INFO: Pod "downwardapi-volume-778db969-1298-4325-8c64-922493e8b46d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014122444s STEP: Saw pod success Oct 30 00:56:16.908: INFO: Pod "downwardapi-volume-778db969-1298-4325-8c64-922493e8b46d" satisfied condition "Succeeded or Failed" Oct 30 00:56:16.910: INFO: Trying to get logs from node node1 pod downwardapi-volume-778db969-1298-4325-8c64-922493e8b46d container client-container: STEP: delete the pod Oct 30 00:56:17.061: INFO: Waiting for pod downwardapi-volume-778db969-1298-4325-8c64-922493e8b46d to disappear Oct 30 00:56:17.064: INFO: Pod downwardapi-volume-778db969-1298-4325-8c64-922493e8b46d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:17.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3015" for this suite. 
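Note: the downward API volume test above ("should set mode on item file") mounts pod metadata as a file with an explicit per-item mode and verifies it from the container. A minimal sketch of that kind of pod, built with the Kubernetes Go API types — the pod name, image, and mounttest flags here are illustrative assumptions, not values taken from this run:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // explicit per-item file mode, the property the test checks
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "client-container", // container name matches the one in the log above
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // assumed image
				// assumed flags: agnhost's mounttest reports the mode of the item file
				Command:      []string{"/agnhost", "mounttest", "--file_mode=/etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
							Mode:     &mode, // nil would fall back to the volume's defaultMode
						}},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}

The test then follows the usual pattern visible in the log: wait for the pod to reach Succeeded, read the container log, and delete the pod.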
• [SLOW TEST:6.210 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:15.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 30 00:56:20.329: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:20.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5949" for this suite. • [SLOW TEST:5.068 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":323,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:14.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-75068afd-3e83-4c9c-b907-3f114d4e64a0 STEP: Creating a pod to test consume configMaps Oct 30 00:56:14.952: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-7fa6dde7-4fdc-44a4-b709-ddd617cbc533" in namespace "projected-6787" to be "Succeeded or Failed" Oct 30 00:56:14.956: INFO: Pod "pod-projected-configmaps-7fa6dde7-4fdc-44a4-b709-ddd617cbc533": Phase="Pending", Reason="", readiness=false. Elapsed: 3.44735ms Oct 30 00:56:16.958: INFO: Pod "pod-projected-configmaps-7fa6dde7-4fdc-44a4-b709-ddd617cbc533": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005866785s Oct 30 00:56:18.963: INFO: Pod "pod-projected-configmaps-7fa6dde7-4fdc-44a4-b709-ddd617cbc533": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010481612s Oct 30 00:56:20.967: INFO: Pod "pod-projected-configmaps-7fa6dde7-4fdc-44a4-b709-ddd617cbc533": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014222033s STEP: Saw pod success Oct 30 00:56:20.967: INFO: Pod "pod-projected-configmaps-7fa6dde7-4fdc-44a4-b709-ddd617cbc533" satisfied condition "Succeeded or Failed" Oct 30 00:56:20.969: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-7fa6dde7-4fdc-44a4-b709-ddd617cbc533 container projected-configmap-volume-test: STEP: delete the pod Oct 30 00:56:20.987: INFO: Waiting for pod pod-projected-configmaps-7fa6dde7-4fdc-44a4-b709-ddd617cbc533 to disappear Oct 30 00:56:20.990: INFO: Pod pod-projected-configmaps-7fa6dde7-4fdc-44a4-b709-ddd617cbc533 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:20.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6787" for this suite. • [SLOW TEST:6.081 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":130,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:20.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Oct 30 00:56:20.381: INFO: Waiting up to 5m0s for pod "pod-4cc2ee74-f1cb-47c4-a2a5-77255395a4b3" in namespace "emptydir-4505" to be "Succeeded or Failed" Oct 30 00:56:20.383: INFO: Pod "pod-4cc2ee74-f1cb-47c4-a2a5-77255395a4b3": Phase="Pending", Reason="", readiness=false. Elapsed: 1.985226ms Oct 30 00:56:22.387: INFO: Pod "pod-4cc2ee74-f1cb-47c4-a2a5-77255395a4b3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006398961s Oct 30 00:56:24.391: INFO: Pod "pod-4cc2ee74-f1cb-47c4-a2a5-77255395a4b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010507519s Oct 30 00:56:26.395: INFO: Pod "pod-4cc2ee74-f1cb-47c4-a2a5-77255395a4b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013877532s STEP: Saw pod success Oct 30 00:56:26.395: INFO: Pod "pod-4cc2ee74-f1cb-47c4-a2a5-77255395a4b3" satisfied condition "Succeeded or Failed" Oct 30 00:56:26.397: INFO: Trying to get logs from node node1 pod pod-4cc2ee74-f1cb-47c4-a2a5-77255395a4b3 container test-container: STEP: delete the pod Oct 30 00:56:26.408: INFO: Waiting for pod pod-4cc2ee74-f1cb-47c4-a2a5-77255395a4b3 to disappear Oct 30 00:56:26.410: INFO: Pod pod-4cc2ee74-f1cb-47c4-a2a5-77255395a4b3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:26.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4505" for this suite. • [SLOW TEST:6.067 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":325,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":359,"failed":0} [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:17.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Oct 30 00:56:17.106: INFO: Waiting up to 5m0s for pod "security-context-733227b1-627d-4e0e-8bbe-05e32e8da3d4" in namespace "security-context-1108" to be "Succeeded or Failed" Oct 30 00:56:17.108: INFO: Pod "security-context-733227b1-627d-4e0e-8bbe-05e32e8da3d4": Phase="Pending", Reason="", readiness=false. Elapsed: 1.975698ms Oct 30 00:56:19.112: INFO: Pod "security-context-733227b1-627d-4e0e-8bbe-05e32e8da3d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006195863s Oct 30 00:56:21.116: INFO: Pod "security-context-733227b1-627d-4e0e-8bbe-05e32e8da3d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009619876s Oct 30 00:56:23.120: INFO: Pod "security-context-733227b1-627d-4e0e-8bbe-05e32e8da3d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013820843s Oct 30 00:56:25.124: INFO: Pod "security-context-733227b1-627d-4e0e-8bbe-05e32e8da3d4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.017997351s Oct 30 00:56:27.127: INFO: Pod "security-context-733227b1-627d-4e0e-8bbe-05e32e8da3d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.020931308s STEP: Saw pod success Oct 30 00:56:27.127: INFO: Pod "security-context-733227b1-627d-4e0e-8bbe-05e32e8da3d4" satisfied condition "Succeeded or Failed" Oct 30 00:56:27.130: INFO: Trying to get logs from node node1 pod security-context-733227b1-627d-4e0e-8bbe-05e32e8da3d4 container test-container: STEP: delete the pod Oct 30 00:56:27.141: INFO: Waiting for pod security-context-733227b1-627d-4e0e-8bbe-05e32e8da3d4 to disappear Oct 30 00:56:27.143: INFO: Pod security-context-733227b1-627d-4e0e-8bbe-05e32e8da3d4 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:27.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-1108" for this suite. • [SLOW TEST:10.075 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":22,"skipped":359,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:46.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod with failed condition STEP: updating the pod Oct 30 00:55:47.140: INFO: Successfully updated pod "var-expansion-d3417a8e-6a6c-4fc4-ac31-d9105bbf4375" STEP: waiting for pod running STEP: deleting the pod gracefully Oct 30 00:55:51.148: INFO: Deleting pod "var-expansion-d3417a8e-6a6c-4fc4-ac31-d9105bbf4375" in namespace "var-expansion-8476" Oct 30 00:55:51.151: INFO: Wait up to 5m0s for pod "var-expansion-d3417a8e-6a6c-4fc4-ac31-d9105bbf4375" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:27.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8476" for this suite. 
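Note: the variable-expansion test above keeps a pod unschedulable-to-start while its volume subpath expansion is failing, then updates the pod and watches it run. The mechanism under test is subPathExpr, where the kubelet substitutes a container environment variable into the mount subpath. A minimal sketch of that mechanism, with hypothetical names rather than the framework's generated spec:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox",
				Env: []corev1.EnvVar{{
					// POD_NAME is resolved from the downward API and then
					// substituted into the subPathExpr below by the kubelet.
					Name: "POD_NAME",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:        "workdir",
					MountPath:   "/logs",
					SubPathExpr: "$(POD_NAME)", // expanded at mount time; a bad reference keeps the container from starting
				}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}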
• [SLOW TEST:160.714 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":11,"skipped":161,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:27.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:27.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-2761" for this suite. 
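Note: the sysctl test above ("should reject invalid sysctls") submits a pod mixing one valid sysctl with malformed names and expects validation to refuse the pod outright, which is why the test completes with no pod ever scheduled. A sketch of the shape of such a spec; the specific valid and invalid names below are assumed examples, not the ones this run used:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-example"}, // illustrative name
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				Sysctls: []corev1.Sysctl{
					{Name: "kernel.shm_rmid_forced", Value: "0"}, // a valid, namespaced sysctl
					{Name: "foo-", Value: "bar"},                 // invalid: malformed name (assumed example)
					{Name: "_kernel", Value: "1"},                // invalid: malformed name (assumed example)
				},
			},
			Containers: []corev1.Container{{Name: "test-container", Image: "busybox"}},
		},
	}
	// Creating this pod should fail API validation because of the malformed names.
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}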
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":12,"skipped":165,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:15.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:56:15.055: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 30 00:56:23.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7207 --namespace=crd-publish-openapi-7207 create -f -' Oct 30 00:56:23.993: INFO: stderr: "" Oct 30 00:56:23.993: INFO: stdout: "e2e-test-crd-publish-openapi-9807-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Oct 30 00:56:23.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7207 --namespace=crd-publish-openapi-7207 delete e2e-test-crd-publish-openapi-9807-crds test-cr' Oct 30 00:56:24.160: INFO: stderr: "" Oct 30 00:56:24.160: INFO: stdout: "e2e-test-crd-publish-openapi-9807-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Oct 30 00:56:24.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7207 --namespace=crd-publish-openapi-7207 apply -f -' Oct 30 00:56:24.494: INFO: stderr: "" Oct 30 00:56:24.494: INFO: stdout: "e2e-test-crd-publish-openapi-9807-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Oct 30 00:56:24.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7207 --namespace=crd-publish-openapi-7207 delete e2e-test-crd-publish-openapi-9807-crds test-cr' Oct 30 00:56:24.653: INFO: stderr: "" Oct 30 00:56:24.653: INFO: stdout: "e2e-test-crd-publish-openapi-9807-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Oct 30 00:56:24.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7207 explain e2e-test-crd-publish-openapi-9807-crds' Oct 30 00:56:24.960: INFO: stderr: "" Oct 30 00:56:24.960: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9807-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:28.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7207" for this suite. 
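Note: the CRD-publishing test above registers a CRD whose root schema preserves unknown fields, then verifies that kubectl create/apply accept a CR with arbitrary properties and that `kubectl explain` returns an empty description (visible in the stdout captured above). A sketch of the schema flag involved, using the apiextensions Go types; the group and kind names here are illustrative, not the generated e2e-test-crd-publish-openapi-9807 ones:

package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	preserve := true
	crd := apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"}, // illustrative name
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type: "object",
						// The flag the test exercises: unknown fields at the
						// schema root are kept instead of being pruned.
						XPreserveUnknownFields: &preserve,
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(b))
}

With this schema, the published OpenAPI document has no properties to describe, which matches the near-empty DESCRIPTION that `kubectl explain` printed above.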
• [SLOW TEST:13.461 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":15,"skipped":190,"failed":0} S ------------------------------ [BeforeEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:15.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting up the test STEP: Creating hostNetwork=false pod Oct 30 00:56:15.934: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:56:17.936: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:56:19.937: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:56:21.937: INFO: The status of Pod test-pod is Running (Ready = true) STEP: Creating hostNetwork=true pod Oct 30 00:56:21.952: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:56:23.954: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:56:25.957: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:56:27.955: INFO: The status of Pod test-host-network-pod is Running (Ready = true) STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Oct 30 00:56:27.958: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9361 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:56:27.958: INFO: >>> kubeConfig: /root/.kube/config Oct 30 00:56:28.046: INFO: Exec stderr: "" Oct 30 00:56:28.047: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9361 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:56:28.047: INFO: >>> kubeConfig: /root/.kube/config Oct 30 00:56:28.126: INFO: Exec stderr: "" Oct 30 00:56:28.127: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9361 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:56:28.127: INFO: >>> kubeConfig: /root/.kube/config Oct 30 00:56:28.204: INFO: Exec stderr: "" Oct 30 00:56:28.204: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9361 PodName:test-pod 
ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:56:28.204: INFO: >>> kubeConfig: /root/.kube/config Oct 30 00:56:28.286: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Oct 30 00:56:28.286: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9361 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:56:28.286: INFO: >>> kubeConfig: /root/.kube/config Oct 30 00:56:28.437: INFO: Exec stderr: "" Oct 30 00:56:28.437: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9361 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:56:28.437: INFO: >>> kubeConfig: /root/.kube/config Oct 30 00:56:28.528: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Oct 30 00:56:28.528: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9361 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:56:28.528: INFO: >>> kubeConfig: /root/.kube/config Oct 30 00:56:28.624: INFO: Exec stderr: "" Oct 30 00:56:28.624: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9361 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:56:28.624: INFO: >>> kubeConfig: /root/.kube/config Oct 30 00:56:28.843: INFO: Exec stderr: "" Oct 30 00:56:28.843: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9361 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:56:28.843: INFO: >>> kubeConfig: /root/.kube/config Oct 30 00:56:28.936: INFO: Exec stderr: "" Oct 30 00:56:28.936: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9361 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:56:28.936: INFO: >>> kubeConfig: /root/.kube/config Oct 30 00:56:29.022: INFO: Exec stderr: "" [AfterEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:29.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-9361" for this suite. 
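The assertions above boil down to: with hostNetwork=false the kubelet writes each container's /etc/hosts (stamping a "# Kubernetes-managed hosts file." header), but it leaves the file alone when the pod runs with hostNetwork=true or when a container mounts its own volume over /etc/hosts. A rough way to check the first two cases by hand (pod names and image are placeholders; note `kubectl run --overrides` expects apiVersion in the JSON):

# hostNetwork=false: /etc/hosts should carry the kubelet's header
kubectl run hosts-managed --image=busybox:1.36 --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/hosts-managed
kubectl exec hosts-managed -- head -1 /etc/hosts
# expected: # Kubernetes-managed hosts file.

# hostNetwork=true: the container sees the node's own hosts file instead
kubectl run hosts-hostnet --image=busybox:1.36 --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"hostNetwork":true}}' -- sleep 3600
kubectl wait --for=condition=Ready pod/hosts-hostnet
kubectl exec hosts-hostnet -- head -1 /etc/hosts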
• [SLOW TEST:13.130 seconds] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":321,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:26.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Oct 30 00:56:26.508: INFO: Waiting up to 5m0s for pod "pod-56121d04-e355-4676-8a50-1ec920f2899d" in namespace "emptydir-7472" to be "Succeeded or Failed" Oct 30 00:56:26.510: INFO: Pod "pod-56121d04-e355-4676-8a50-1ec920f2899d": Phase="Pending", Reason="", readiness=false. Elapsed: 1.972554ms Oct 30 00:56:28.512: INFO: Pod "pod-56121d04-e355-4676-8a50-1ec920f2899d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004610185s Oct 30 00:56:30.517: INFO: Pod "pod-56121d04-e355-4676-8a50-1ec920f2899d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009561726s STEP: Saw pod success Oct 30 00:56:30.517: INFO: Pod "pod-56121d04-e355-4676-8a50-1ec920f2899d" satisfied condition "Succeeded or Failed" Oct 30 00:56:30.520: INFO: Trying to get logs from node node2 pod pod-56121d04-e355-4676-8a50-1ec920f2899d container test-container: STEP: delete the pod Oct 30 00:56:30.532: INFO: Waiting for pod pod-56121d04-e355-4676-8a50-1ec920f2899d to disappear Oct 30 00:56:30.534: INFO: Pod pod-56121d04-e355-4676-8a50-1ec920f2899d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:30.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7472" for this suite. 
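The pod this EmptyDir test creates is essentially a memory-backed emptyDir mounted into a container that runs as a non-root user and reports the volume's 0777 mode before exiting, which is why the pod ends in "Succeeded". A stripped-down sketch under those assumptions (pod name, image, and UID are placeholders; the real test uses the suite's mounttest image rather than busybox):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-sketch
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # non-root, as in the (non-root,0777,tmpfs) variant
  containers:
  - name: test-container
    image: busybox:1.36
    # Print the mount's mode and confirm it is tmpfs-backed, then exit 0
    # so the pod reaches "Succeeded" like the pod in the log above.
    command: ["sh", "-c", "ls -ld /mnt/test && grep ' /mnt/test ' /proc/mounts"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory           # "tmpfs" in the test name means medium: Memory
EOF
kubectl logs emptydir-tmpfs-sketch   # inspect once the pod has completed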
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":361,"failed":0} [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:30.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap that has name configmap-test-emptyKey-a7cc8bb3-711b-4a4a-b5a2-f6e30e5d58c8 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:30.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6480" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":26,"skipped":361,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:30.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:56:30.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2972 version' Oct 30 00:56:30.717: INFO: stderr: "" Oct 30 00:56:30.717: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.5\", GitCommit:\"aea7bbadd2fc0cd689de94a54e5b7b758869d691\", GitTreeState:\"clean\", BuildDate:\"2021-09-15T21:10:45Z\", GoVersion:\"go1.16.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.1\", GitCommit:\"5e58841cce77d4bc13713ad2b91fa0d961e69192\", GitTreeState:\"clean\", BuildDate:\"2021-05-12T14:12:29Z\", GoVersion:\"go1.16.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:30.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2972" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":27,"skipped":372,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:29.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Oct 30 00:56:29.121: INFO: Pod name pod-release: Found 0 pods out of 1 Oct 30 00:56:34.124: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:35.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7274" for this suite. • [SLOW TEST:6.055 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":15,"skipped":357,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:53:58.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-9177 STEP: creating service affinity-nodeport in namespace services-9177 STEP: creating replication controller affinity-nodeport in namespace services-9177 I1030 00:53:58.285363 29 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-9177, replica count: 3 I1030 00:54:01.336953 29 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 00:54:04.337977 29 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 
00:54:07.338508 29 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 30 00:54:07.346: INFO: Creating new exec pod Oct 30 00:54:14.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' Oct 30 00:54:14.619: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" Oct 30 00:54:14.619: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 30 00:54:14.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.30.242 80' Oct 30 00:54:14.862: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.30.242 80\nConnection to 10.233.30.242 80 port [tcp/http] succeeded!\n" Oct 30 00:54:14.862: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 30 00:54:14.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:54:15.089: INFO: rc: 1 Oct 30 00:54:15.089: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:54:16.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:54:16.700: INFO: rc: 1 Oct 30 00:54:16.700: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:54:17.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:54:17.330: INFO: rc: 1 Oct 30 00:54:17.330: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
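What follows is the framework's standard NodePort reachability loop: it execs into the helper pod and pipes one line into nc (-t TCP, -w 2 two-second timeout, -v verbose) aimed at a node IP and the service's nodePort. "Connection refused" at this stage typically just means kube-proxy has not yet programmed the nodePort on that node, so the framework retries about once per second. The probe can be run by hand the same way (this reuses the exec pod named in the log, so it only works while that pod exists):

kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 \
  exec execpod-affinitygk2qr -- /bin/sh -x -c \
  'echo hostName | nc -v -t -w 2 10.10.190.207 31012'
# rc 0 and "succeeded!" on stderr once the nodePort is open;
# rc 1 and "Connection refused" while it is not.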
[probe attempts from Oct 30 00:54:18.090 through Oct 30 00:55:25 elided: the identical kubectl exec / nc command was retried roughly once per second, and every attempt returned rc: 1 with "nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused", followed by "Retrying..."]
Oct 30 00:55:26.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:26.335: INFO: rc: 1 Oct 30 00:55:26.336: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying...
Oct 30 00:55:27.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:27.321: INFO: rc: 1 Oct 30 00:55:27.321: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:28.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:28.316: INFO: rc: 1 Oct 30 00:55:28.316: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31012 + echo hostName nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:29.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:29.334: INFO: rc: 1 Oct 30 00:55:29.334: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:30.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:30.335: INFO: rc: 1 Oct 30 00:55:30.336: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:55:31.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:31.330: INFO: rc: 1 Oct 30 00:55:31.330: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:32.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:32.355: INFO: rc: 1 Oct 30 00:55:32.355: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:33.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:33.392: INFO: rc: 1 Oct 30 00:55:33.392: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:34.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:34.341: INFO: rc: 1 Oct 30 00:55:34.341: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:55:35.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:35.444: INFO: rc: 1 Oct 30 00:55:35.445: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:36.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:36.357: INFO: rc: 1 Oct 30 00:55:36.357: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:37.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:37.327: INFO: rc: 1 Oct 30 00:55:37.327: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:38.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:38.487: INFO: rc: 1 Oct 30 00:55:38.487: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:55:39.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:39.497: INFO: rc: 1 Oct 30 00:55:39.498: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:40.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:40.322: INFO: rc: 1 Oct 30 00:55:40.322: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:41.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:42.693: INFO: rc: 1 Oct 30 00:55:42.693: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:43.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:43.338: INFO: rc: 1 Oct 30 00:55:43.338: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:55:44.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:44.439: INFO: rc: 1 Oct 30 00:55:44.439: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:45.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:45.682: INFO: rc: 1 Oct 30 00:55:45.682: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:46.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:46.392: INFO: rc: 1 Oct 30 00:55:46.392: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:47.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:47.535: INFO: rc: 1 Oct 30 00:55:47.535: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:55:48.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:48.312: INFO: rc: 1 Oct 30 00:55:48.312: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:49.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:49.329: INFO: rc: 1 Oct 30 00:55:49.329: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:50.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:50.350: INFO: rc: 1 Oct 30 00:55:50.350: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:51.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:51.346: INFO: rc: 1 Oct 30 00:55:51.346: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:55:52.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:52.338: INFO: rc: 1 Oct 30 00:55:52.338: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:53.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:53.366: INFO: rc: 1 Oct 30 00:55:53.366: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:54.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:54.386: INFO: rc: 1 Oct 30 00:55:54.386: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:55.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:55.335: INFO: rc: 1 Oct 30 00:55:55.335: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:55:56.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:56.358: INFO: rc: 1 Oct 30 00:55:56.358: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:57.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:57.326: INFO: rc: 1 Oct 30 00:55:57.326: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:58.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:58.432: INFO: rc: 1 Oct 30 00:55:58.432: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:55:59.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:55:59.337: INFO: rc: 1 Oct 30 00:55:59.337: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:56:00.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:56:00.321: INFO: rc: 1 Oct 30 00:56:00.322: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:56:01.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:56:01.335: INFO: rc: 1 Oct 30 00:56:01.335: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:56:02.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:56:02.339: INFO: rc: 1 Oct 30 00:56:02.339: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:56:03.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:56:03.466: INFO: rc: 1 Oct 30 00:56:03.467: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:56:04.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:56:04.737: INFO: rc: 1 Oct 30 00:56:04.737: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:56:05.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:56:05.324: INFO: rc: 1 Oct 30 00:56:05.324: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:56:06.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:56:06.332: INFO: rc: 1 Oct 30 00:56:06.332: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:56:07.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:56:07.338: INFO: rc: 1 Oct 30 00:56:07.338: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:56:08.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:56:08.359: INFO: rc: 1 Oct 30 00:56:08.359: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:56:09.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:56:09.348: INFO: rc: 1 Oct 30 00:56:09.348: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:56:10.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:56:10.341: INFO: rc: 1 Oct 30 00:56:10.341: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:56:11.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012' Oct 30 00:56:12.292: INFO: rc: 1 Oct 30 00:56:12.292: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31012 nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
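For reference, the probe being retried above can be run by hand. This is the same shell command the framework quotes in each "Running" line; the namespace, exec pod, node IP, and NodePort are all copied from the log, nothing new is introduced:

  # Exec into the client pod and probe the NodePort directly.
  # On success the agnhost backend is expected to answer the "hostName"
  # request with the serving pod's hostname (which is how the affinity
  # test tracks which backend answered); here every attempt exits 1.
  /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 \
    exec execpod-affinitygk2qr -- /bin/sh -x -c \
    'echo hostName | nc -v -t -w 2 10.10.190.207 31012'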
Oct 30 00:56:15.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012'
Oct 30 00:56:15.316: INFO: rc: 1
Oct 30 00:56:15.316: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31012
nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 00:56:15.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012'
Oct 30 00:56:15.564: INFO: rc: 1
Oct 30 00:56:15.564: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9177 exec execpod-affinitygk2qr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31012:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31012
nc: connect to 10.10.190.207 port 31012 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 00:56:15.565: FAIL: Unexpected error:
    <*errors.errorString | 0xc0014e2f30>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31012 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31012 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc000d0e9a0, 0x779f8f8, 0xc001d9ec60, 0xc001829b80, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2531
k8s.io/kubernetes/test/e2e/network.glob..func24.25()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1829 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001901980)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001901980)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001901980, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
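The error itself only says that nothing accepted a TCP connection on 10.10.190.207:31012. A hypothetical triage sketch for this failure mode follows; these commands are not part of the test output, the namespace comes from the log, the Service name is assumed to match the ReplicationController name affinity-nodeport, and an iptables-mode kube-proxy is assumed:

  # Did the Service have ready endpoints while the probe was failing?
  kubectl --kubeconfig=/root/.kube/config -n services-9177 get endpoints affinity-nodeport -o wide
  # Confirm the Service really exposes nodePort 31012:
  kubectl --kubeconfig=/root/.kube/config -n services-9177 get svc affinity-nodeport -o yaml
  # On the node that owns 10.10.190.207, check whether kube-proxy programmed the port:
  iptables-save | grep 31012

The events collected below show all three affinity-nodeport backends running by 00:54:03, so the backends were up for the whole two-minute probe window; that makes NodePort programming on the node a more likely suspect than the pods themselves.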
Oct 30 00:56:15.566: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-9177, will wait for the garbage collector to delete the pods
Oct 30 00:56:15.642: INFO: Deleting ReplicationController affinity-nodeport took: 3.474371ms
Oct 30 00:56:15.743: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.434305ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-9177".
STEP: Found 27 events.
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:53:58 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-lpc5m
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:53:58 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-jkk2m
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:53:58 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-h9dhq
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:53:58 +0000 UTC - event for affinity-nodeport-h9dhq: {default-scheduler } Scheduled: Successfully assigned services-9177/affinity-nodeport-h9dhq to node1
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:53:58 +0000 UTC - event for affinity-nodeport-jkk2m: {default-scheduler } Scheduled: Successfully assigned services-9177/affinity-nodeport-jkk2m to node2
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:53:58 +0000 UTC - event for affinity-nodeport-lpc5m: {default-scheduler } Scheduled: Successfully assigned services-9177/affinity-nodeport-lpc5m to node1
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:53:59 +0000 UTC - event for affinity-nodeport-jkk2m: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:54:00 +0000 UTC - event for affinity-nodeport-jkk2m: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 347.368332ms
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:54:00 +0000 UTC - event for affinity-nodeport-jkk2m: {kubelet node2} Created: Created container affinity-nodeport
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:54:01 +0000 UTC - event for affinity-nodeport-h9dhq: {kubelet node1} Created: Created container affinity-nodeport
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:54:01 +0000 UTC - event for affinity-nodeport-h9dhq: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:54:01 +0000 UTC - event for affinity-nodeport-h9dhq: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 547.13392ms
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:54:01 +0000 UTC - event for affinity-nodeport-jkk2m: {kubelet node2} Started: Started container affinity-nodeport
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:54:01 +0000 UTC - event for affinity-nodeport-lpc5m: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:54:02 +0000 UTC - event for affinity-nodeport-h9dhq: {kubelet node1} Started: Started container affinity-nodeport
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:54:02 +0000 UTC - event for affinity-nodeport-lpc5m: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 347.599825ms
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:54:03 +0000 UTC - event for affinity-nodeport-lpc5m: {kubelet node1} Started: Started container affinity-nodeport
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:54:03 +0000 UTC - event for affinity-nodeport-lpc5m: {kubelet node1} Created: Created container affinity-nodeport
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:54:07 +0000 UTC - event for execpod-affinitygk2qr: {default-scheduler } Scheduled: Successfully assigned services-9177/execpod-affinitygk2qr to node2
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:54:09 +0000 UTC - event for execpod-affinitygk2qr: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:54:10 +0000 UTC - event for execpod-affinitygk2qr: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 874.763879ms
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:54:10 +0000 UTC - event for execpod-affinitygk2qr: {kubelet node2} Created: Created container agnhost-container
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:54:10 +0000 UTC - event for execpod-affinitygk2qr: {kubelet node2} Started: Started container agnhost-container
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:56:15 +0000 UTC - event for affinity-nodeport-h9dhq: {kubelet node1} Killing: Stopping container affinity-nodeport
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:56:15 +0000 UTC - event for affinity-nodeport-jkk2m: {kubelet node2} Killing: Stopping container affinity-nodeport
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:56:15 +0000 UTC - event for affinity-nodeport-lpc5m: {kubelet node1} Killing: Stopping container affinity-nodeport
Oct 30 00:56:33.460: INFO: At 2021-10-30 00:56:15 +0000 UTC - event for execpod-affinitygk2qr: {kubelet node2} Killing: Stopping container agnhost-container
Oct 30 00:56:33.462: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Oct 30 00:56:33.462: INFO: 
Oct 30 00:56:33.466: INFO: Logging node info for node master1
Oct 30 00:56:33.468: INFO: Node Info: &Node{ObjectMeta:{master1 b47c04d5-47a7-4a95-8e97-481e6e60af54 66972 0 2021-10-29 21:05:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:05:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-29 21:05:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-29 21:13:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:27 +0000 UTC,LastTransitionTime:2021-10-29 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:27 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:27 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:27 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 00:56:27 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d3ed60c561e427db72df14bd9006ed0,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:01b9d6bc-4126-4864-a1df-901a1bee4906,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 00:56:33.469: INFO: Logging kubelet events for node master1
Oct 30 00:56:33.472: INFO: Logging pods the kubelet thinks is on node master1
Oct 30 00:56:33.494: INFO: kube-proxy-z5k8p started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 00:56:33.494: INFO: Container kube-proxy ready: true, restart count 1
Oct 30 00:56:33.494: INFO: coredns-8474476ff8-lczbr started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 00:56:33.494: INFO: Container coredns ready: true, restart count 1
Oct 30 00:56:33.494: INFO: container-registry-65d7c44b96-zzkfl started at 2021-10-29 21:12:56 +0000 UTC (0+2 container statuses recorded)
Oct 30 00:56:33.494: INFO: Container docker-registry ready: true, restart count 0
Oct 30 00:56:33.494: INFO: Container nginx ready: true, restart count 0
Oct 30 00:56:33.494: INFO: node-exporter-fv84w started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 00:56:33.494: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 30 00:56:33.494: INFO: Container node-exporter ready: true, restart count 0
Oct 30 00:56:33.494: INFO: kube-scheduler-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 00:56:33.494: INFO: Container kube-scheduler ready: true, restart count 0
Oct 30 00:56:33.494: INFO: kube-controller-manager-master1 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 00:56:33.494: INFO: Container kube-controller-manager ready: true, restart count 2
Oct 30 00:56:33.494: INFO: kube-flannel-d4pmt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 00:56:33.494: INFO: Init container install-cni ready: true, restart count 0
Oct 30 00:56:33.494: INFO: Container kube-flannel ready: true, restart count 2
Oct 30 00:56:33.494: INFO: kube-multus-ds-amd64-wgkfq started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 00:56:33.494: INFO: Container kube-multus ready: true, restart count 1
Oct 30 00:56:33.494: INFO: kube-apiserver-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 00:56:33.494: INFO: Container kube-apiserver ready: true, restart count 0
W1030 00:56:33.508226 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
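The Node dumps in this section are the framework's one-line Go rendering of the v1.Node objects. Roughly the same information can be pulled from the cluster interactively with standard kubectl subcommands (kubeconfig as used throughout the log):

  # Full object, including conditions, capacity, and cached images:
  kubectl --kubeconfig=/root/.kube/config get node master1 -o yaml
  # Human-readable summary of the same data:
  kubectl --kubeconfig=/root/.kube/config describe node master1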
Oct 30 00:56:33.578: INFO: Latency metrics for node master1 Oct 30 00:56:33.578: INFO: Logging node info for node master2 Oct 30 00:56:33.581: INFO: Node Info: &Node{ObjectMeta:{master2 208792d3-d365-4ddb-83d4-10e6e818079c 66896 0 2021-10-29 21:06:06 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-29 21:18:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:19 +0000 
UTC,LastTransitionTime:2021-10-29 21:11:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:23 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:23 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:23 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 00:56:23 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:12290c1916d84ddda20431c28083da6a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:314e82b8-9747-4131-b883-220496309995,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 00:56:33.581: INFO: Logging kubelet events for node master2 Oct 30 00:56:33.583: INFO: Logging pods the kubelet thinks is on node master2 Oct 30 00:56:33.591: INFO: kube-multus-ds-amd64-brkpk started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.591: INFO: Container kube-multus ready: true, restart count 1 Oct 30 00:56:33.591: INFO: node-exporter-lc9kk started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 00:56:33.591: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 00:56:33.591: INFO: Container node-exporter ready: true, restart count 0 Oct 30 00:56:33.591: INFO: kube-apiserver-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.591: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 00:56:33.591: INFO: kube-controller-manager-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.591: INFO: Container kube-controller-manager ready: true, restart count 3 Oct 30 00:56:33.591: INFO: kube-scheduler-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.591: INFO: Container kube-scheduler ready: true, restart count 2 Oct 30 00:56:33.591: INFO: kube-proxy-5gz4v started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.591: INFO: Container kube-proxy ready: true, restart count 2 Oct 30 00:56:33.591: INFO: kube-flannel-qvqll started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 00:56:33.591: INFO: Init container install-cni ready: true, restart count 2 Oct 30 00:56:33.591: INFO: Container kube-flannel ready: true, restart count 1 W1030 00:56:33.604612 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
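[Annotation: the "Logging pods the kubelet thinks is on node ..." blocks list every pod bound to a node, with per-container readiness and restart counts. The sketch below reproduces such a listing with a spec.nodeName field selector — again an illustration rather than the framework's actual code, under the same client-go and kubeconfig assumptions as the previous sketch.]

    // Illustrative sketch (not produced by the suite): list the pods
    // scheduled to a given node, mirroring the per-node pod blocks above.
    // The spec.nodeName field selector filters server-side by bound node.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
            metav1.ListOptions{FieldSelector: "spec.nodeName=master2"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            // Echo the log's "(init+regular container statuses recorded)" shape.
            fmt.Printf("%s (%d+%d container statuses recorded)\n",
                p.Name, len(p.Status.InitContainerStatuses), len(p.Status.ContainerStatuses))
            for _, st := range p.Status.ContainerStatuses {
                fmt.Printf("  Container %s ready: %v, restart count %d\n",
                    st.Name, st.Ready, st.RestartCount)
            }
        }
    }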
Oct 30 00:56:33.667: INFO: Latency metrics for node master2 Oct 30 00:56:33.667: INFO: Logging node info for node master3 Oct 30 00:56:33.669: INFO: Node Info: &Node{ObjectMeta:{master3 168f1589-e029-47ae-b194-10215fc22d6a 67132 0 2021-10-29 21:06:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-29 21:16:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-29 21:16:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:36 +0000 UTC,LastTransitionTime:2021-10-29 21:11:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:32 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:32 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:32 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 00:56:32 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de18dcb6cb4c493e9f4d987da2c8b3fd,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:89235c4b-b1f5-4716-bbd7-18b41c0bde74,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 00:56:33.670: INFO: Logging kubelet events for node master3 Oct 30 00:56:33.671: INFO: Logging pods the kubelet thinks is on node master3 Oct 30 00:56:33.681: INFO: kube-apiserver-master3 started at 2021-10-29 21:11:10 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.681: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 00:56:33.681: INFO: kube-scheduler-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.681: INFO: Container kube-scheduler ready: true, restart count 2 Oct 30 00:56:33.681: INFO: dns-autoscaler-7df78bfcfb-phsdx started at 2021-10-29 21:09:02 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.681: INFO: Container autoscaler ready: true, restart count 1 Oct 30 00:56:33.681: INFO: node-feature-discovery-controller-cff799f9f-qq7g4 started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.681: INFO: Container nfd-controller ready: true, restart count 0 Oct 30 00:56:33.681: INFO: coredns-8474476ff8-wrwwv started at 2021-10-29 21:09:00 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.681: INFO: Container coredns ready: true, restart count 1 Oct 30 00:56:33.681: INFO: prometheus-operator-585ccfb458-czbr2 started at 2021-10-29 21:21:06 +0000 UTC (0+2 container statuses recorded) Oct 30 00:56:33.681: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 00:56:33.681: INFO: Container prometheus-operator ready: true, restart count 0 Oct 30 00:56:33.681: INFO: node-exporter-bv946 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 00:56:33.681: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 00:56:33.681: INFO: Container node-exporter ready: true, restart 
count 0 Oct 30 00:56:33.681: INFO: kube-controller-manager-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.681: INFO: Container kube-controller-manager ready: true, restart count 1 Oct 30 00:56:33.681: INFO: kube-proxy-r6fpx started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.681: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 00:56:33.681: INFO: kube-flannel-rbdlt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 00:56:33.681: INFO: Init container install-cni ready: true, restart count 2 Oct 30 00:56:33.681: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 00:56:33.681: INFO: kube-multus-ds-amd64-bdwh9 started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.681: INFO: Container kube-multus ready: true, restart count 1 W1030 00:56:33.696747 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 00:56:33.776: INFO: Latency metrics for node master3 Oct 30 00:56:33.776: INFO: Logging node info for node node1 Oct 30 00:56:33.779: INFO: Node Info: &Node{ObjectMeta:{node1 ddef9269-94c5-4165-81fb-a3b0c4ac5c75 67210 0 2021-10-29 21:07:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 
feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:38 +0000 UTC,LastTransitionTime:2021-10-29 21:11:38 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:32 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:32 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:32 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 00:56:32 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bf4179125e4495c89c046ed0ae7baf7,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ce868148-dc5e-4c7c-a555-42ee929547f7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432289,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 00:56:33.779: INFO: Logging kubelet events for node node1 Oct 30 00:56:33.781: INFO: Logging pods the kubelet thinks is on node node1 Oct 30 00:56:33.874: INFO: nginx-proxy-node1 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.874: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 00:56:33.874: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.874: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 00:56:33.874: INFO: cmk-init-discover-node1-n4mcc started at 2021-10-29 21:19:28 +0000 UTC (0+3 container statuses recorded) Oct 30 00:56:33.874: INFO: Container discover ready: false, restart count 0 Oct 
30 00:56:33.874: INFO: Container init ready: false, restart count 0 Oct 30 00:56:33.874: INFO: Container install ready: false, restart count 0 Oct 30 00:56:33.874: INFO: cmk-89lqq started at 2021-10-29 21:20:10 +0000 UTC (0+2 container statuses recorded) Oct 30 00:56:33.874: INFO: Container nodereport ready: true, restart count 0 Oct 30 00:56:33.874: INFO: Container reconcile ready: true, restart count 0 Oct 30 00:56:33.874: INFO: node-exporter-256wm started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 00:56:33.874: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 00:56:33.874: INFO: Container node-exporter ready: true, restart count 0 Oct 30 00:56:33.874: INFO: prometheus-k8s-0 started at 2021-10-29 21:21:17 +0000 UTC (0+4 container statuses recorded) Oct 30 00:56:33.874: INFO: Container config-reloader ready: true, restart count 0 Oct 30 00:56:33.874: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 00:56:33.874: INFO: Container grafana ready: true, restart count 0 Oct 30 00:56:33.874: INFO: Container prometheus ready: true, restart count 1 Oct 30 00:56:33.874: INFO: oidc-discovery-validator started at 2021-10-30 00:56:27 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.874: INFO: Container oidc-discovery-validator ready: false, restart count 0 Oct 30 00:56:33.874: INFO: pod-init-4e769a4b-d18a-491a-a1d7-41247122aca3 started at 2021-10-30 00:56:21 +0000 UTC (2+1 container statuses recorded) Oct 30 00:56:33.874: INFO: Init container init1 ready: false, restart count 1 Oct 30 00:56:33.874: INFO: Init container init2 ready: false, restart count 0 Oct 30 00:56:33.874: INFO: Container run1 ready: false, restart count 0 Oct 30 00:56:33.874: INFO: frontend-685fc574d5-xdjdt started at 2021-10-30 00:56:32 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.874: INFO: Container guestbook-frontend ready: false, restart count 0 Oct 30 00:56:33.874: INFO: kube-multus-ds-amd64-68wrz started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.874: INFO: Container kube-multus ready: true, restart count 1 Oct 30 00:56:33.874: INFO: node-feature-discovery-worker-w5vdb started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.874: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 00:56:33.874: INFO: ss2-0 started at 2021-10-30 00:56:23 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.874: INFO: Container webserver ready: true, restart count 0 Oct 30 00:56:33.874: INFO: pod-37003f6f-eeed-4517-9527-be2c4f7b2bbd started at 2021-10-30 00:56:28 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.874: INFO: Container test-container ready: false, restart count 0 Oct 30 00:56:33.874: INFO: kube-flannel-phg88 started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 00:56:33.874: INFO: Init container install-cni ready: true, restart count 2 Oct 30 00:56:33.874: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 00:56:33.874: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.874: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 00:56:33.874: INFO: kube-proxy-z5hqt started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.874: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 00:56:33.874: INFO: collectd-d45rv started at 2021-10-29 
21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 00:56:33.874: INFO: Container collectd ready: true, restart count 0 Oct 30 00:56:33.874: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 00:56:33.874: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 00:56:33.874: INFO: test-host-network-pod started at 2021-10-30 00:56:21 +0000 UTC (0+2 container statuses recorded) Oct 30 00:56:33.874: INFO: Container busybox-1 ready: true, restart count 0 Oct 30 00:56:33.874: INFO: Container busybox-2 ready: true, restart count 0 Oct 30 00:56:33.874: INFO: pod-release-28r5x started at 2021-10-30 00:56:29 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:33.874: INFO: Container pod-release ready: false, restart count 0 W1030 00:56:33.890060 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 00:56:35.144: INFO: Latency metrics for node node1 Oct 30 00:56:35.144: INFO: Logging node info for node node2 Oct 30 00:56:35.147: INFO: Node Info: &Node{ObjectMeta:{node2 3b49ad19-ba56-4f4a-b1fa-eef102063de9 67069 0 2021-10-29 21:07:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null 
flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:34 +0000 UTC,LastTransitionTime:2021-10-29 21:11:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:30 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:30 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 00:56:30 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 00:56:30 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7283436dd9e34722a6e4df817add95ed,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c219e7bd-582b-4d6c-b379-1161acc70676,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 00:56:35.148: INFO: Logging kubelet events for node node2 Oct 30 00:56:35.149: INFO: Logging pods the kubelet thinks is on node node2 Oct 30 00:56:35.164: INFO: test-pod started at 2021-10-30 00:56:15 +0000 UTC (0+3 container statuses recorded) Oct 30 00:56:35.164: INFO: Container busybox-1 ready: true, restart count 0 Oct 30 00:56:35.164: INFO: Container busybox-2 ready: true, restart count 0 Oct 30 00:56:35.164: INFO: Container busybox-3 ready: true, restart count 0 Oct 30 00:56:35.164: INFO: agnhost-replica-6bcf79b489-h9dmg started at 2021-10-30 00:56:32 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:35.164: INFO: Container replica ready: false, restart count 0 Oct 30 00:56:35.164: INFO: node-feature-discovery-worker-h6lcp started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:35.164: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 00:56:35.164: INFO: ss2-2 started at 2021-10-30 00:55:53 +0000 UTC (0+1 container statuses recorded) Oct 30 
00:56:35.164: INFO: Container webserver ready: true, restart count 0 Oct 30 00:56:35.164: INFO: agnhost-primary-5db8ddd565-q5tkl started at 2021-10-30 00:56:32 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:35.164: INFO: Container primary ready: false, restart count 0 Oct 30 00:56:35.164: INFO: test-pod started at 2021-10-30 00:54:25 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:35.164: INFO: Container webserver ready: true, restart count 0 Oct 30 00:56:35.164: INFO: ss2-1 started at 2021-10-30 00:56:13 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:35.164: INFO: Container webserver ready: true, restart count 0 Oct 30 00:56:35.164: INFO: kube-flannel-f6s5v started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 00:56:35.164: INFO: Init container install-cni ready: true, restart count 2 Oct 30 00:56:35.164: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 00:56:35.164: INFO: kube-multus-ds-amd64-7tvbl started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:35.164: INFO: Container kube-multus ready: true, restart count 1 Oct 30 00:56:35.164: INFO: cmk-webhook-6c9d5f8578-ffk66 started at 2021-10-29 21:20:11 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:35.164: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 00:56:35.164: INFO: nginx-proxy-node2 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:35.164: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 00:56:35.164: INFO: kubernetes-dashboard-785dcbb76d-pbjjt started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:35.164: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 00:56:35.164: INFO: cmk-init-discover-node2-2fmmt started at 2021-10-29 21:19:48 +0000 UTC (0+3 container statuses recorded) Oct 30 00:56:35.164: INFO: Container discover ready: false, restart count 0 Oct 30 00:56:35.164: INFO: Container init ready: false, restart count 0 Oct 30 00:56:35.164: INFO: Container install ready: false, restart count 0 Oct 30 00:56:35.164: INFO: sample-webhook-deployment-78988fc6cd-4fgdc started at 2021-10-30 00:56:28 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:35.164: INFO: Container sample-webhook ready: true, restart count 0 Oct 30 00:56:35.164: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:35.164: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 00:56:35.164: INFO: agnhost-replica-6bcf79b489-7bpqp started at 2021-10-30 00:56:32 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:35.164: INFO: Container replica ready: false, restart count 0 Oct 30 00:56:35.164: INFO: to-be-attached-pod started at 2021-10-30 00:56:33 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:35.164: INFO: Container container1 ready: false, restart count 0 Oct 30 00:56:35.165: INFO: cmk-8bpbf started at 2021-10-29 21:20:11 +0000 UTC (0+2 container statuses recorded) Oct 30 00:56:35.165: INFO: Container nodereport ready: true, restart count 0 Oct 30 00:56:35.165: INFO: Container reconcile ready: true, restart count 0 Oct 30 00:56:35.165: INFO: node-exporter-r77s4 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 00:56:35.165: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 00:56:35.165: INFO: Container node-exporter ready: true, restart count 0 Oct 30 
00:56:35.165: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh started at 2021-10-29 21:24:23 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:35.165: INFO: Container tas-extender ready: true, restart count 0 Oct 30 00:56:35.165: INFO: frontend-685fc574d5-4h4bg started at 2021-10-30 00:56:32 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:35.165: INFO: Container guestbook-frontend ready: false, restart count 0 Oct 30 00:56:35.165: INFO: collectd-flvhl started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 00:56:35.165: INFO: Container collectd ready: true, restart count 0 Oct 30 00:56:35.165: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 00:56:35.165: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 00:56:35.165: INFO: kube-proxy-76285 started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:35.165: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 00:56:35.165: INFO: frontend-685fc574d5-kklxq started at 2021-10-30 00:56:32 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:35.165: INFO: Container guestbook-frontend ready: false, restart count 0 Oct 30 00:56:35.165: INFO: liveness-00176d17-9bb9-4e06-870f-696107c114d2 started at 2021-10-30 00:55:00 +0000 UTC (0+1 container statuses recorded) Oct 30 00:56:35.165: INFO: Container agnhost-container ready: true, restart count 4 W1030 00:56:35.178438 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 00:56:36.174: INFO: Latency metrics for node node2 Oct 30 00:56:36.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9177" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [157.929 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:56:15.565: Unexpected error: <*errors.errorString | 0xc0014e2f30>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31012 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31012 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":11,"skipped":226,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:28.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Oct 30 00:56:28.532: INFO: Waiting up to 5m0s for pod "pod-37003f6f-eeed-4517-9527-be2c4f7b2bbd" in namespace "emptydir-8387" to be "Succeeded or Failed" Oct 30 00:56:28.534: INFO: Pod "pod-37003f6f-eeed-4517-9527-be2c4f7b2bbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316945ms Oct 30 00:56:30.539: INFO: Pod "pod-37003f6f-eeed-4517-9527-be2c4f7b2bbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00767652s Oct 30 00:56:32.542: INFO: Pod "pod-37003f6f-eeed-4517-9527-be2c4f7b2bbd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010460182s Oct 30 00:56:34.547: INFO: Pod "pod-37003f6f-eeed-4517-9527-be2c4f7b2bbd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015078804s Oct 30 00:56:36.554: INFO: Pod "pod-37003f6f-eeed-4517-9527-be2c4f7b2bbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.021758004s STEP: Saw pod success Oct 30 00:56:36.554: INFO: Pod "pod-37003f6f-eeed-4517-9527-be2c4f7b2bbd" satisfied condition "Succeeded or Failed" Oct 30 00:56:36.556: INFO: Trying to get logs from node node1 pod pod-37003f6f-eeed-4517-9527-be2c4f7b2bbd container test-container: STEP: delete the pod Oct 30 00:56:36.569: INFO: Waiting for pod pod-37003f6f-eeed-4517-9527-be2c4f7b2bbd to disappear Oct 30 00:56:36.571: INFO: Pod pod-37003f6f-eeed-4517-9527-be2c4f7b2bbd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:36.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8387" for this suite. • [SLOW TEST:8.078 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":191,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:36.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:36.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-5350" for this suite. 
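
The PodTemplate lifecycle exercised above leaves no STEP lines behind, so for the record, a rough by-hand equivalent of the same create/patch/list/delete flow is sketched below. All object and label names are illustrative, not the test's own; the pause image is taken from the node image list earlier in this log.

# Sketch: drive a PodTemplate through the same lifecycle with kubectl (names illustrative).
kubectl create -f - <<'EOF'
apiVersion: v1
kind: PodTemplate
metadata:
  name: demo-template
template:
  metadata:
    labels:
      app: demo
  spec:
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.4.1
EOF
kubectl get podtemplates                         # the watch step, approximated by listing
kubectl patch podtemplate demo-template --type=merge -p '{"metadata":{"labels":{"podtemplate":"patched"}}}'
kubectl get podtemplates -l podtemplate=patched  # find it via a label selector
kubectl delete podtemplate demo-template
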
• ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":17,"skipped":208,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:36.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Oct 30 00:56:37.158: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 00:56:37.172: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 00:56:39.184: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152197, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152197, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152197, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152197, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 00:56:42.195: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:42.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3028" for this suite. STEP: Destroying namespace "webhook-3028-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.608 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":18,"skipped":216,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:42.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:42.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2361" for this suite. 
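
The STEP lines above map one-to-one onto plain kubectl operations; a hand-run sketch follows. The ServiceAccount and label names are illustrative, and the watch step is approximated by re-listing rather than a real watch:

kubectl create serviceaccount demo-sa           # STEP: creating a ServiceAccount
kubectl get serviceaccounts                     # STEP: watching for it to be added (approximated)
kubectl patch serviceaccount demo-sa --type=merge -p '{"metadata":{"labels":{"e2e":"patched"}}}'
kubectl get serviceaccounts -l e2e=patched      # STEP: finding it by LabelSelector
kubectl delete serviceaccount demo-sa           # STEP: deleting the ServiceAccount
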
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":19,"skipped":244,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:42.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating cluster-info Oct 30 00:56:42.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2340 cluster-info' Oct 30 00:56:42.580: INFO: stderr: "" Oct 30 00:56:42.580: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.10.190.202:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:42.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2340" for this suite. • ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:27.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 00:56:27.997: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 00:56:30.008: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152188, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152188, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152188, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152187, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, 
CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 30 00:56:33.018: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Oct 30 00:56:43.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=webhook-8059 attach --namespace=webhook-8059 to-be-attached-pod -i -c=container1'
Oct 30 00:56:43.221: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:56:43.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8059" for this suite.
STEP: Destroying namespace "webhook-8059-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:15.991 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":13,"skipped":193,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:56:30.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should create and stop a working application [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating all guestbook components
Oct 30 00:56:30.763: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend

Oct 30 00:56:30.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1430 create -f -'
Oct 30 00:56:31.135: INFO: stderr: ""
Oct 30 00:56:31.135: INFO: stdout: "service/agnhost-replica created\n"
Oct 30 00:56:31.135: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend

Oct 30 00:56:31.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1430 create -f -'
Oct 30 00:56:31.469: INFO: stderr: ""
Oct 30 00:56:31.469: INFO: stdout: "service/agnhost-primary created\n"
Oct 30 00:56:31.469: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Oct 30 00:56:31.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1430 create -f -'
Oct 30 00:56:31.786: INFO: stderr: ""
Oct 30 00:56:31.786: INFO: stdout: "service/frontend created\n"
Oct 30 00:56:31.787: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Oct 30 00:56:31.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1430 create -f -'
Oct 30 00:56:32.076: INFO: stderr: ""
Oct 30 00:56:32.076: INFO: stdout: "deployment.apps/frontend created\n"
Oct 30 00:56:32.076: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Oct 30 00:56:32.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1430 create -f -'
Oct 30 00:56:32.410: INFO: stderr: ""
Oct 30 00:56:32.410: INFO: stdout: "deployment.apps/agnhost-primary created\n"
Oct 30 00:56:32.410: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Oct 30 00:56:32.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1430 create -f -'
Oct 30 00:56:32.747: INFO: stderr: ""
Oct 30 00:56:32.747: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Oct 30 00:56:32.747: INFO: Waiting for all frontend pods to be Running.
Oct 30 00:56:42.800: INFO: Waiting for frontend to serve content.
Oct 30 00:56:42.872: INFO: Trying to add a new entry to the guestbook.
Oct 30 00:56:42.881: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Oct 30 00:56:42.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1430 delete --grace-period=0 --force -f -'
Oct 30 00:56:43.026: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 30 00:56:43.026: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Oct 30 00:56:43.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1430 delete --grace-period=0 --force -f -'
Oct 30 00:56:43.150: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 30 00:56:43.150: INFO: stdout: "service \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Oct 30 00:56:43.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1430 delete --grace-period=0 --force -f -'
Oct 30 00:56:43.296: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 30 00:56:43.297: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Oct 30 00:56:43.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1430 delete --grace-period=0 --force -f -'
Oct 30 00:56:43.440: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 30 00:56:43.440: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Oct 30 00:56:43.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1430 delete --grace-period=0 --force -f -'
Oct 30 00:56:43.566: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 30 00:56:43.566: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Oct 30 00:56:43.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1430 delete --grace-period=0 --force -f -'
Oct 30 00:56:43.691: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 30 00:56:43.691: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 00:56:43.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1430" for this suite.
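
The repeated stderr warnings in the teardown above come straight from the flags the test passes: --grace-period=0 --force removes the API object immediately, without waiting for the kubelet to confirm that the containers have stopped, so each command returns before the workload is necessarily gone. The equivalent cleanup against a saved copy of these manifests might look like the following (the file name is illustrative):

kubectl delete --grace-period=0 --force -f guestbook-all.yaml   # immediate, unconfirmed deletion
kubectl delete -f guestbook-all.yaml                            # safer default: graceful termination
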
• [SLOW TEST:12.957 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":28,"skipped":380,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:36.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7331.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7331.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7331.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7331.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7331.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7331.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7331.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7331.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7331.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7331.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7331.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7331.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7331.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 46.0.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.0.46_udp@PTR;check="$$(dig +tcp +noall +answer +search 46.0.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.0.46_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7331.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7331.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7331.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7331.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7331.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7331.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7331.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7331.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7331.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7331.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7331.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7331.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7331.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 46.0.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.0.46_udp@PTR;check="$$(dig +tcp +noall +answer +search 46.0.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.0.46_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 30 00:56:42.248: INFO: Unable to read wheezy_udp@dns-test-service.dns-7331.svc.cluster.local from pod dns-7331/dns-test-e2bac09d-7e32-458f-b1a2-ad72dab2ddc0: the server could not find the requested resource (get pods dns-test-e2bac09d-7e32-458f-b1a2-ad72dab2ddc0) Oct 30 00:56:42.251: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7331.svc.cluster.local from pod dns-7331/dns-test-e2bac09d-7e32-458f-b1a2-ad72dab2ddc0: the server could not find the requested resource (get pods dns-test-e2bac09d-7e32-458f-b1a2-ad72dab2ddc0) Oct 30 00:56:42.256: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7331.svc.cluster.local from pod dns-7331/dns-test-e2bac09d-7e32-458f-b1a2-ad72dab2ddc0: the server could not find the requested resource (get pods dns-test-e2bac09d-7e32-458f-b1a2-ad72dab2ddc0) Oct 30 00:56:42.262: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7331.svc.cluster.local from pod dns-7331/dns-test-e2bac09d-7e32-458f-b1a2-ad72dab2ddc0: the server could not find the requested resource (get pods dns-test-e2bac09d-7e32-458f-b1a2-ad72dab2ddc0) Oct 30 00:56:42.289: INFO: Unable to read jessie_udp@dns-test-service.dns-7331.svc.cluster.local from pod dns-7331/dns-test-e2bac09d-7e32-458f-b1a2-ad72dab2ddc0: the server could not find the requested resource (get pods dns-test-e2bac09d-7e32-458f-b1a2-ad72dab2ddc0) Oct 30 00:56:42.291: INFO: Unable to read jessie_tcp@dns-test-service.dns-7331.svc.cluster.local from pod dns-7331/dns-test-e2bac09d-7e32-458f-b1a2-ad72dab2ddc0: the server could not find the requested resource (get pods dns-test-e2bac09d-7e32-458f-b1a2-ad72dab2ddc0) Oct 30 00:56:42.298: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7331.svc.cluster.local from pod dns-7331/dns-test-e2bac09d-7e32-458f-b1a2-ad72dab2ddc0: the server could not find the requested resource (get pods dns-test-e2bac09d-7e32-458f-b1a2-ad72dab2ddc0) Oct 30 00:56:42.300: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7331.svc.cluster.local from pod dns-7331/dns-test-e2bac09d-7e32-458f-b1a2-ad72dab2ddc0: the server could not find the requested resource (get pods dns-test-e2bac09d-7e32-458f-b1a2-ad72dab2ddc0) Oct 30 00:56:42.314: INFO: Lookups using dns-7331/dns-test-e2bac09d-7e32-458f-b1a2-ad72dab2ddc0 failed for: [wheezy_udp@dns-test-service.dns-7331.svc.cluster.local wheezy_tcp@dns-test-service.dns-7331.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7331.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7331.svc.cluster.local jessie_udp@dns-test-service.dns-7331.svc.cluster.local jessie_tcp@dns-test-service.dns-7331.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7331.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7331.svc.cluster.local] Oct 30 00:56:47.365: INFO: DNS probes using dns-7331/dns-test-e2bac09d-7e32-458f-b1a2-ad72dab2ddc0 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:47.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7331" for this suite. 
• [SLOW TEST:11.201 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":12,"skipped":227,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":20,"skipped":252,"failed":0} [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:42.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-6374/configmap-test-d84a3151-a77e-4666-b1e3-e879c656671c STEP: Creating a pod to test consume configMaps Oct 30 00:56:42.627: INFO: Waiting up to 5m0s for pod "pod-configmaps-39f9fa37-19c5-4881-b66f-464140a9f97e" in namespace "configmap-6374" to be "Succeeded or Failed" Oct 30 00:56:42.629: INFO: Pod "pod-configmaps-39f9fa37-19c5-4881-b66f-464140a9f97e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156255ms Oct 30 00:56:44.632: INFO: Pod "pod-configmaps-39f9fa37-19c5-4881-b66f-464140a9f97e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005788736s Oct 30 00:56:46.637: INFO: Pod "pod-configmaps-39f9fa37-19c5-4881-b66f-464140a9f97e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010794326s Oct 30 00:56:48.640: INFO: Pod "pod-configmaps-39f9fa37-19c5-4881-b66f-464140a9f97e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013824598s Oct 30 00:56:50.647: INFO: Pod "pod-configmaps-39f9fa37-19c5-4881-b66f-464140a9f97e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020734563s STEP: Saw pod success Oct 30 00:56:50.647: INFO: Pod "pod-configmaps-39f9fa37-19c5-4881-b66f-464140a9f97e" satisfied condition "Succeeded or Failed" Oct 30 00:56:50.651: INFO: Trying to get logs from node node1 pod pod-configmaps-39f9fa37-19c5-4881-b66f-464140a9f97e container env-test: STEP: delete the pod Oct 30 00:56:50.662: INFO: Waiting for pod pod-configmaps-39f9fa37-19c5-4881-b66f-464140a9f97e to disappear Oct 30 00:56:50.664: INFO: Pod pod-configmaps-39f9fa37-19c5-4881-b66f-464140a9f97e no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:50.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6374" for this suite. 
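
What the test assembles through the API can be sketched with kubectl: a ConfigMap plus a pod whose environment pulls one key in via configMapKeyRef. All names are illustrative; busybox:1.28 appears in the node image list above.

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: env-test
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.28
    command: ["sh", "-c", "env | grep CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: demo-config
          key: data-1
EOF
kubectl logs env-test   # once the pod Succeeds, prints: CONFIG_DATA_1=value-1
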
• [SLOW TEST:8.080 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":252,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:43.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-12cee596-555c-4222-879a-71f439b2c7a7 STEP: Creating a pod to test consume secrets Oct 30 00:56:43.320: INFO: Waiting up to 5m0s for pod "pod-secrets-7ece6aef-1920-4786-9a79-33c4da198a94" in namespace "secrets-8869" to be "Succeeded or Failed" Oct 30 00:56:43.323: INFO: Pod "pod-secrets-7ece6aef-1920-4786-9a79-33c4da198a94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.59087ms Oct 30 00:56:45.328: INFO: Pod "pod-secrets-7ece6aef-1920-4786-9a79-33c4da198a94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007202024s Oct 30 00:56:47.331: INFO: Pod "pod-secrets-7ece6aef-1920-4786-9a79-33c4da198a94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01014632s Oct 30 00:56:49.334: INFO: Pod "pod-secrets-7ece6aef-1920-4786-9a79-33c4da198a94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013617274s Oct 30 00:56:51.338: INFO: Pod "pod-secrets-7ece6aef-1920-4786-9a79-33c4da198a94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018068741s STEP: Saw pod success Oct 30 00:56:51.338: INFO: Pod "pod-secrets-7ece6aef-1920-4786-9a79-33c4da198a94" satisfied condition "Succeeded or Failed" Oct 30 00:56:51.341: INFO: Trying to get logs from node node1 pod pod-secrets-7ece6aef-1920-4786-9a79-33c4da198a94 container secret-volume-test: STEP: delete the pod Oct 30 00:56:51.356: INFO: Waiting for pod pod-secrets-7ece6aef-1920-4786-9a79-33c4da198a94 to disappear Oct 30 00:56:51.358: INFO: Pod pod-secrets-7ece6aef-1920-4786-9a79-33c4da198a94 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:51.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8869" for this suite. 
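
The "with mappings" in the test name refers to the items list on a secret volume, which remaps a secret key onto a chosen file path instead of the default key-named file. A minimal hand-rolled version, with illustrative names:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-vol-test
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      items:
      - key: data-1            # the secret key...
        path: new-path-data-1  # ...exposed under this mapped file name
EOF
kubectl logs secret-vol-test   # prints: value-1
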
• [SLOW TEST:8.087 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":207,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:43.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Oct 30 00:56:43.791: INFO: The status of Pod pod-update-activedeadlineseconds-f466c3bb-543a-4e1e-bfa3-f3f79fe4a010 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:56:45.796: INFO: The status of Pod pod-update-activedeadlineseconds-f466c3bb-543a-4e1e-bfa3-f3f79fe4a010 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:56:47.794: INFO: The status of Pod pod-update-activedeadlineseconds-f466c3bb-543a-4e1e-bfa3-f3f79fe4a010 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:56:49.796: INFO: The status of Pod pod-update-activedeadlineseconds-f466c3bb-543a-4e1e-bfa3-f3f79fe4a010 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Oct 30 00:56:50.313: INFO: Successfully updated pod "pod-update-activedeadlineseconds-f466c3bb-543a-4e1e-bfa3-f3f79fe4a010" Oct 30 00:56:50.313: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-f466c3bb-543a-4e1e-bfa3-f3f79fe4a010" in namespace "pods-5160" to be "terminated due to deadline exceeded" Oct 30 00:56:50.315: INFO: Pod "pod-update-activedeadlineseconds-f466c3bb-543a-4e1e-bfa3-f3f79fe4a010": Phase="Running", Reason="", readiness=true. Elapsed: 2.339694ms Oct 30 00:56:52.319: INFO: Pod "pod-update-activedeadlineseconds-f466c3bb-543a-4e1e-bfa3-f3f79fe4a010": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.006135256s Oct 30 00:56:52.319: INFO: Pod "pod-update-activedeadlineseconds-f466c3bb-543a-4e1e-bfa3-f3f79fe4a010" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:52.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5160" for this suite. 
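
activeDeadlineSeconds is one of the few pod spec fields that can be changed on a live pod; once the deadline elapses, the kubelet kills the pod and leaves it in phase Failed with reason DeadlineExceeded, which is exactly what the polling above records. A hand-run sketch with illustrative names and timings:

kubectl run deadline-demo --image=busybox:1.28 --restart=Never -- sleep 600
kubectl wait --for=condition=Ready pod/deadline-demo --timeout=60s
kubectl patch pod deadline-demo --type=merge -p '{"spec":{"activeDeadlineSeconds":5}}'
sleep 10
kubectl get pod deadline-demo -o jsonpath='{.status.phase}/{.status.reason}{"\n"}'   # Failed/DeadlineExceeded
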
• [SLOW TEST:8.572 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":412,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:47.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 00:56:47.889: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 00:56:49.897: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152207, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152207, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152207, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152207, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 00:56:52.908: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:56:52.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3796" for this suite. STEP: Destroying namespace "webhook-3796-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.532 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":13,"skipped":267,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:51.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Oct 30 00:56:51.460: INFO: Waiting up to 5m0s for pod "pod-de8a05b6-e622-476e-85c8-4408a49e3ec3" in namespace "emptydir-8638" to be "Succeeded or Failed" Oct 30 00:56:51.462: INFO: Pod "pod-de8a05b6-e622-476e-85c8-4408a49e3ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.623749ms Oct 30 00:56:53.466: INFO: Pod "pod-de8a05b6-e622-476e-85c8-4408a49e3ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005792546s Oct 30 00:56:55.471: INFO: Pod "pod-de8a05b6-e622-476e-85c8-4408a49e3ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010947795s Oct 30 00:56:57.476: INFO: Pod "pod-de8a05b6-e622-476e-85c8-4408a49e3ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016473213s Oct 30 00:56:59.482: INFO: Pod "pod-de8a05b6-e622-476e-85c8-4408a49e3ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021897993s Oct 30 00:57:01.487: INFO: Pod "pod-de8a05b6-e622-476e-85c8-4408a49e3ec3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.027091762s STEP: Saw pod success Oct 30 00:57:01.487: INFO: Pod "pod-de8a05b6-e622-476e-85c8-4408a49e3ec3" satisfied condition "Succeeded or Failed" Oct 30 00:57:01.490: INFO: Trying to get logs from node node1 pod pod-de8a05b6-e622-476e-85c8-4408a49e3ec3 container test-container: STEP: delete the pod Oct 30 00:57:01.503: INFO: Waiting for pod pod-de8a05b6-e622-476e-85c8-4408a49e3ec3 to disappear Oct 30 00:57:01.506: INFO: Pod pod-de8a05b6-e622-476e-85c8-4408a49e3ec3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:01.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8638" for this suite. 
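
This case differs from the earlier (root,0777,tmpfs) run only in the volume medium: an empty emptyDir block is backed by node-local disk, while medium: Memory backs it with tmpfs. A sketch of the same write-then-inspect check, with illustrative names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.28
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # default medium (node disk); use medium: Memory for tmpfs
EOF
kubectl logs emptydir-demo   # shows -rw-r--r-- (0644) on /test-volume/f
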
• [SLOW TEST:10.124 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":218,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:52.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Updating PodDisruptionBudget status STEP: Waiting for all pods to be running Oct 30 00:56:54.400: INFO: running pods: 0 < 1 Oct 30 00:56:56.405: INFO: running pods: 0 < 1 Oct 30 00:56:58.404: INFO: running pods: 0 < 1 Oct 30 00:57:00.406: INFO: running pods: 0 < 1 STEP: locating a running pod STEP: Waiting for the pdb to be processed STEP: Patching PodDisruptionBudget status STEP: Waiting for the pdb to be processed [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:02.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-1047" for this suite. 
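The DisruptionController case exercises the status subresource of a PodDisruptionBudget both ways, via an update and via a patch, which is what the "Updating PodDisruptionBudget status" and "Patching PodDisruptionBudget status" steps refer to. A hedged client-go sketch; the PDB name, selector, and the patched field are illustrative, not the suite's exact assertions:

package main

import (
	"context"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ns := "default" // the suite runs in a generated namespace like disruption-1047

	minAvailable := intstr.FromInt(1)
	pdb := &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-pdb"},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "demo"}},
		},
	}
	pdb, err = client.PolicyV1().PodDisruptionBudgets(ns).Create(context.TODO(), pdb, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Write the status subresource directly (the "Updating ... status" step).
	pdb.Status.ObservedGeneration = pdb.Generation
	if _, err := client.PolicyV1().PodDisruptionBudgets(ns).UpdateStatus(context.TODO(), pdb, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Patch the status subresource (the "Patching ... status" step).
	patch := []byte(`{"status":{"observedGeneration":1}}`)
	if _, err := client.PolicyV1().PodDisruptionBudgets(ns).Patch(
		context.TODO(), "demo-pdb", types.MergePatchType, patch, metav1.PatchOptions{}, "status"); err != nil {
		panic(err)
	}
}

The repeated "Waiting for the pdb to be processed" steps exist because the disruption controller must observe each write and recompute status before the test can assert on it.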
• [SLOW TEST:10.082 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":30,"skipped":425,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:27.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:56:27.215: INFO: created pod Oct 30 00:56:27.215: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-1506" to be "Succeeded or Failed" Oct 30 00:56:27.217: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.360063ms Oct 30 00:56:29.221: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005868307s Oct 30 00:56:31.225: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009940624s Oct 30 00:56:33.229: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014466915s STEP: Saw pod success Oct 30 00:56:33.230: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" Oct 30 00:57:03.232: INFO: polling logs Oct 30 00:57:03.247: INFO: Pod logs: 2021/10/30 00:56:30 OK: Got token 2021/10/30 00:56:30 validating with in-cluster discovery 2021/10/30 00:56:30 OK: got issuer https://kubernetes.default.svc.cluster.local 2021/10/30 00:56:30 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-1506:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1635555987, NotBefore:1635555387, IssuedAt:1635555387, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-1506", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"13cb7220-f45c-451e-a3d9-4666e4fa1fbc"}}} 2021/10/30 00:56:30 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local 2021/10/30 00:56:30 OK: Validated signature on JWT 2021/10/30 00:56:30 OK: Got valid claims from token! 
2021/10/30 00:56:30 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-1506:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1635555987, NotBefore:1635555387, IssuedAt:1635555387, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-1506", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"13cb7220-f45c-451e-a3d9-4666e4fa1fbc"}}} Oct 30 00:57:03.247: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:03.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1506" for this suite. • [SLOW TEST:36.081 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":23,"skipped":366,"failed":0} SSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:03.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Oct 30 00:57:03.296: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:03.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4948" for this suite. 
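The Events API case reduces to three calls: create a set of events carrying a known label, list by that label, and remove the whole set with a single DeleteCollection request. Roughly, with an assumed label selector in place of whatever the suite actually applies:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ns := "default"             // the suite uses a generated namespace like events-4948
	sel := "testevent-set=true" // illustrative label the events were created with

	// List the labeled events, then delete them all in one server-side call.
	list, err := client.EventsV1().Events(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d events\n", len(list.Items))

	err = client.EventsV1().Events(ns).DeleteCollection(
		context.TODO(), metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: sel})
	if err != nil {
		panic(err)
	}
	// A follow-up List with the same selector should now come back empty,
	// which is the "check that the list of events matches the requested quantity" step.
}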
• ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:53.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:07.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9988" for this suite. • [SLOW TEST:14.049 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":14,"skipped":291,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:02.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:09.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4421" for this suite. • [SLOW TEST:7.039 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":-1,"completed":31,"skipped":451,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:01.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-0ebb0dff-f4b7-4978-8794-7366caf4d858 STEP: Creating configMap with name cm-test-opt-upd-2b5f347a-e664-4be0-a20c-233213d74b48 STEP: Creating the pod Oct 30 00:57:01.603: INFO: The status of Pod pod-projected-configmaps-6f05265f-18c7-41ff-a9dd-82484002b90d is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:57:03.606: INFO: The status of Pod pod-projected-configmaps-6f05265f-18c7-41ff-a9dd-82484002b90d is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:57:05.608: INFO: The status of Pod pod-projected-configmaps-6f05265f-18c7-41ff-a9dd-82484002b90d is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:57:07.606: INFO: The status of Pod pod-projected-configmaps-6f05265f-18c7-41ff-a9dd-82484002b90d is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:57:09.607: INFO: The status of Pod pod-projected-configmaps-6f05265f-18c7-41ff-a9dd-82484002b90d is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-0ebb0dff-f4b7-4978-8794-7366caf4d858 STEP: Updating configmap cm-test-opt-upd-2b5f347a-e664-4be0-a20c-233213d74b48 STEP: Creating configMap with name cm-test-opt-create-254c6c9b-21bf-4342-b3d3-a18ff6ccd388 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:12.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9523" for this suite. 
• [SLOW TEST:10.459 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":235,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:50.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-6650 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 30 00:56:50.776: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 30 00:56:50.806: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:56:52.811: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:56:54.811: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:56:56.810: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:56:58.810: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:57:00.811: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:57:02.809: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:57:04.812: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:57:06.811: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:57:08.810: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:57:10.813: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:57:12.810: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 30 00:57:12.814: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 30 00:57:16.847: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Oct 30 00:57:16.847: INFO: Going to poll 10.244.3.135 on port 8080 at least 0 times, with a maximum of 34 tries before failing Oct 30 00:57:16.849: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.3.135:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6650 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:57:16.849: INFO: >>> kubeConfig: /root/.kube/config Oct 30 00:57:16.979: INFO: Found all 1 expected endpoints: [netserver-0] Oct 30 00:57:16.979: INFO: Going to poll 10.244.4.238 on port 8080 at least 0 times, with a maximum of 34 tries 
before failing Oct 30 00:57:16.982: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.4.238:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6650 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:57:16.982: INFO: >>> kubeConfig: /root/.kube/config Oct 30 00:57:17.142: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:17.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6650" for this suite. • [SLOW TEST:26.403 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":301,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:17.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:17.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8152" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":-1,"completed":23,"skipped":340,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:17.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:17.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9679" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":24,"skipped":350,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:12.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 00:57:12.393: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 00:57:14.405: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152232, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152232, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152232, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152232, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 00:57:17.416: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a validating webhook on ValidatingWebhookConfiguration 
and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:17.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8761" for this suite. STEP: Destroying namespace "webhook-8761-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.449 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":17,"skipped":252,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:21.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Oct 30 00:56:21.079: INFO: PodSpec: initContainers in spec.initContainers Oct 30 00:57:17.497: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-4e769a4b-d18a-491a-a1d7-41247122aca3", GenerateName:"", Namespace:"init-container-473", SelfLink:"", UID:"db415dc8-9d8f-4922-a2fa-76b12ec9474b", ResourceVersion:"68764", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771152181, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"79584948"}, Annotations:map[string]string{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.123\"\n ],\n \"mac\": \"5a:ff:26:8f:10:55\",\n \"default\": true,\n \"dns\": {}\n}]", 
"k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.123\"\n ],\n \"mac\": \"5a:ff:26:8f:10:55\",\n \"default\": true,\n \"dns\": {}\n}]", "kubernetes.io/psp":"collectd"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005828588), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0058285a0)}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0058285b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0058285d0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0058285e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005828600)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-dvh7n", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc002052b60), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-dvh7n", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), 
Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-dvh7n", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-dvh7n", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0058fecd8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"node1", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002e56620), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0058fed60)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0058fed80)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0058fed88), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0058fed8c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0058d83d0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152181, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotInitialized", Message:"containers with 
incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152181, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152181, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152181, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.10.190.207", PodIP:"10.244.3.123", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.3.123"}}, StartTime:(*v1.Time)(0xc005828630), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002e56700)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002e56770)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"docker-pullable://k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"docker://d854777e5322f034e8c2b307acd8b90c3e20d4e2775324b3a3c43d163abbc6eb", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002052d40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002052d20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0058fee0f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:17.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-473" for this suite. 
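The pod dump above is the point of this test: init1 (/bin/false) shows RestartCount:3, init2 is still Waiting, and the app container run1 never starts, because init containers must all succeed in order before regular containers run, and RestartPolicy Always makes the kubelet retry the failing one indefinitely with backoff. A minimal pod spec that reproduces the situation, using the same images the log shows:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				// init1 always exits non-zero, so the kubelet restarts it forever.
				{Name: "init1", Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command: []string{"/bin/false"}},
				// init2 would succeed, but init containers run sequentially, so it never starts.
				{Name: "init2", Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				// The app container stays in Waiting; the pod condition reads
				// ContainersNotInitialized, exactly as in the dump above.
				{Name: "run1", Image: "k8s.gcr.io/pause:3.4.1"},
			},
		},
	}

	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}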
• [SLOW TEST:56.447 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":7,"skipped":162,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:54:06.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-4091 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet Oct 30 00:54:06.090: INFO: Found 0 stateful pods, waiting for 3 Oct 30 00:54:16.096: INFO: Found 2 stateful pods, waiting for 3 Oct 30 00:54:26.096: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 30 00:54:26.096: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 30 00:54:26.096: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Oct 30 00:54:26.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4091 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 30 00:54:26.383: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 30 00:54:26.383: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 30 00:54:26.383: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Oct 30 00:54:36.411: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Oct 30 00:54:46.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4091 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 00:54:47.360: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Oct 30 00:54:47.360: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 30 00:54:47.360: INFO: 
stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 30 00:55:07.377: INFO: Waiting for StatefulSet statefulset-4091/ss2 to complete update Oct 30 00:55:07.377: INFO: Waiting for Pod statefulset-4091/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Oct 30 00:55:17.384: INFO: Waiting for StatefulSet statefulset-4091/ss2 to complete update STEP: Rolling back to a previous revision Oct 30 00:55:27.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4091 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 30 00:55:27.615: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 30 00:55:27.615: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 30 00:55:27.615: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 30 00:55:37.643: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Oct 30 00:55:47.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4091 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 00:55:48.312: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Oct 30 00:55:48.312: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 30 00:55:48.312: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 30 00:56:18.328: INFO: Waiting for StatefulSet statefulset-4091/ss2 to complete update Oct 30 00:56:18.328: INFO: Waiting for Pod statefulset-4091/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Oct 30 00:56:28.336: INFO: Waiting for StatefulSet statefulset-4091/ss2 to complete update [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Oct 30 00:56:38.335: INFO: Deleting all statefulset in ns statefulset-4091 Oct 30 00:56:38.337: INFO: Scaling statefulset ss2 to 0 Oct 30 00:57:18.350: INFO: Waiting for statefulset status.replicas updated to 0 Oct 30 00:57:18.353: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:18.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4091" for this suite. 
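The rolling-update case never touches pods directly: it edits the StatefulSet's pod template (httpd 2.4.38-1 to 2.4.39-1 and back), and the controller replaces pods from the highest ordinal downward, tracking each template as a ControllerRevision; those revision hashes are the ss2-5bbbc9fc94 / ss2-677d6db895 names the log waits on. A sketch of the template update with a conflict-retry loop (namespace and name hardcoded for illustration):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ns, name := "default", "ss2" // the suite uses a generated namespace like statefulset-4091

	// Re-fetch and retry on conflict, since the controller may be writing
	// status to the same object while we update the template.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ss, err := client.AppsV1().StatefulSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		ss.Spec.Template.Spec.Containers[0].Image = "k8s.gcr.io/e2e-test-images/httpd:2.4.39-1"
		_, err = client.AppsV1().StatefulSets(ns).Update(context.TODO(), ss, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
	// "Rolling back" is the same operation with the previous image; the controller
	// converges the pods onto the new revision one ordinal at a time.
}

The mv index.html shuffling in the log is unrelated to the update mechanics: it deliberately fails the pod's readiness probe so the test can verify the rollout respects readiness before proceeding to the next pod.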
• [SLOW TEST:192.309 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":8,"skipped":176,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:09.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Oct 30 00:57:09.574: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3216 2e996f84-1050-4fac-b425-f412c0ead9b3 68530 0 2021-10-30 00:57:09 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-30 00:57:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 00:57:09.574: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3216 2e996f84-1050-4fac-b425-f412c0ead9b3 68531 0 2021-10-30 00:57:09 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-30 00:57:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 00:57:09.575: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3216 2e996f84-1050-4fac-b425-f412c0ead9b3 68533 0 2021-10-30 00:57:09 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-30 00:57:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Oct 30 00:57:19.598: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3216 2e996f84-1050-4fac-b425-f412c0ead9b3 68826 0 2021-10-30 00:57:09 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-30 00:57:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 00:57:19.598: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3216 2e996f84-1050-4fac-b425-f412c0ead9b3 68827 0 2021-10-30 00:57:09 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-30 00:57:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 00:57:19.598: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3216 2e996f84-1050-4fac-b425-f412c0ead9b3 68828 0 2021-10-30 00:57:09 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-30 00:57:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:19.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3216" for this suite. • [SLOW TEST:10.068 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:17.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-8601c157-ccb1-4b30-8931-a2835ffe9c52 STEP: Creating a pod to test consume secrets Oct 30 00:57:17.449: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4215aad0-1e8e-44b7-b0e4-db82309100e4" in namespace "projected-6006" to be "Succeeded or Failed" Oct 30 00:57:17.451: INFO: Pod "pod-projected-secrets-4215aad0-1e8e-44b7-b0e4-db82309100e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023583ms Oct 30 00:57:19.456: INFO: Pod "pod-projected-secrets-4215aad0-1e8e-44b7-b0e4-db82309100e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006442123s Oct 30 00:57:21.460: INFO: Pod "pod-projected-secrets-4215aad0-1e8e-44b7-b0e4-db82309100e4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.010905322s Oct 30 00:57:23.464: INFO: Pod "pod-projected-secrets-4215aad0-1e8e-44b7-b0e4-db82309100e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014782858s Oct 30 00:57:25.468: INFO: Pod "pod-projected-secrets-4215aad0-1e8e-44b7-b0e4-db82309100e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018922794s STEP: Saw pod success Oct 30 00:57:25.468: INFO: Pod "pod-projected-secrets-4215aad0-1e8e-44b7-b0e4-db82309100e4" satisfied condition "Succeeded or Failed" Oct 30 00:57:25.472: INFO: Trying to get logs from node node2 pod pod-projected-secrets-4215aad0-1e8e-44b7-b0e4-db82309100e4 container projected-secret-volume-test: STEP: delete the pod Oct 30 00:57:25.488: INFO: Waiting for pod pod-projected-secrets-4215aad0-1e8e-44b7-b0e4-db82309100e4 to disappear Oct 30 00:57:25.490: INFO: Pod pod-projected-secrets-4215aad0-1e8e-44b7-b0e4-db82309100e4 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:25.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6006" for this suite. • [SLOW TEST:8.083 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":396,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:17.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test env composition Oct 30 00:57:17.570: INFO: Waiting up to 5m0s for pod "var-expansion-ece54d18-ac4a-4537-bc8f-e5791632e819" in namespace "var-expansion-7311" to be "Succeeded or Failed" Oct 30 00:57:17.574: INFO: Pod "var-expansion-ece54d18-ac4a-4537-bc8f-e5791632e819": Phase="Pending", Reason="", readiness=false. Elapsed: 3.829792ms Oct 30 00:57:19.579: INFO: Pod "var-expansion-ece54d18-ac4a-4537-bc8f-e5791632e819": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009409658s Oct 30 00:57:21.585: INFO: Pod "var-expansion-ece54d18-ac4a-4537-bc8f-e5791632e819": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014498712s Oct 30 00:57:23.588: INFO: Pod "var-expansion-ece54d18-ac4a-4537-bc8f-e5791632e819": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017808161s Oct 30 00:57:25.591: INFO: Pod "var-expansion-ece54d18-ac4a-4537-bc8f-e5791632e819": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.021237117s STEP: Saw pod success Oct 30 00:57:25.591: INFO: Pod "var-expansion-ece54d18-ac4a-4537-bc8f-e5791632e819" satisfied condition "Succeeded or Failed" Oct 30 00:57:25.594: INFO: Trying to get logs from node node2 pod var-expansion-ece54d18-ac4a-4537-bc8f-e5791632e819 container dapi-container: STEP: delete the pod Oct 30 00:57:25.605: INFO: Waiting for pod var-expansion-ece54d18-ac4a-4537-bc8f-e5791632e819 to disappear Oct 30 00:57:25.607: INFO: Pod var-expansion-ece54d18-ac4a-4537-bc8f-e5791632e819 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:25.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7311" for this suite. • [SLOW TEST:8.075 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":279,"failed":0} SSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:25.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:57:25.528: INFO: Creating pod... Oct 30 00:57:25.542: INFO: Pod Quantity: 1 Status: Pending Oct 30 00:57:26.546: INFO: Pod Quantity: 1 Status: Pending Oct 30 00:57:27.546: INFO: Pod Quantity: 1 Status: Pending Oct 30 00:57:28.545: INFO: Pod Quantity: 1 Status: Pending Oct 30 00:57:29.546: INFO: Pod Status: Running Oct 30 00:57:29.546: INFO: Creating service... 
Oct 30 00:57:29.554: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-656/pods/agnhost/proxy/some/path/with/DELETE Oct 30 00:57:29.557: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Oct 30 00:57:29.557: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-656/pods/agnhost/proxy/some/path/with/GET Oct 30 00:57:29.559: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Oct 30 00:57:29.559: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-656/pods/agnhost/proxy/some/path/with/HEAD Oct 30 00:57:29.562: INFO: http.Client request:HEAD | StatusCode:200 Oct 30 00:57:29.562: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-656/pods/agnhost/proxy/some/path/with/OPTIONS Oct 30 00:57:29.565: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Oct 30 00:57:29.565: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-656/pods/agnhost/proxy/some/path/with/PATCH Oct 30 00:57:29.567: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Oct 30 00:57:29.567: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-656/pods/agnhost/proxy/some/path/with/POST Oct 30 00:57:29.569: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Oct 30 00:57:29.569: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-656/pods/agnhost/proxy/some/path/with/PUT Oct 30 00:57:29.572: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT Oct 30 00:57:29.572: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-656/services/test-service/proxy/some/path/with/DELETE Oct 30 00:57:29.574: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Oct 30 00:57:29.575: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-656/services/test-service/proxy/some/path/with/GET Oct 30 00:57:29.578: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Oct 30 00:57:29.578: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-656/services/test-service/proxy/some/path/with/HEAD Oct 30 00:57:29.581: INFO: http.Client request:HEAD | StatusCode:200 Oct 30 00:57:29.581: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-656/services/test-service/proxy/some/path/with/OPTIONS Oct 30 00:57:29.584: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Oct 30 00:57:29.584: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-656/services/test-service/proxy/some/path/with/PATCH Oct 30 00:57:29.587: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Oct 30 00:57:29.587: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-656/services/test-service/proxy/some/path/with/POST Oct 30 00:57:29.590: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Oct 30 00:57:29.590: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-656/services/test-service/proxy/some/path/with/PUT Oct 30 00:57:29.593: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT [AfterEach] version v1 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:29.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-656" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":26,"skipped":397,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:29.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:29.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4966" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:17.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:33.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7983" for this suite. • [SLOW TEST:16.112 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":8,"skipped":167,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":27,"skipped":427,"failed":0} [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:29.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 00:57:29.768: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8d312f7-2097-4950-9888-4c45fbef0ed6" in namespace "downward-api-545" to be "Succeeded or Failed" Oct 30 00:57:29.770: INFO: Pod "downwardapi-volume-e8d312f7-2097-4950-9888-4c45fbef0ed6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.434155ms Oct 30 00:57:31.774: INFO: Pod "downwardapi-volume-e8d312f7-2097-4950-9888-4c45fbef0ed6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005810524s Oct 30 00:57:33.777: INFO: Pod "downwardapi-volume-e8d312f7-2097-4950-9888-4c45fbef0ed6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008813484s STEP: Saw pod success Oct 30 00:57:33.777: INFO: Pod "downwardapi-volume-e8d312f7-2097-4950-9888-4c45fbef0ed6" satisfied condition "Succeeded or Failed" Oct 30 00:57:33.779: INFO: Trying to get logs from node node1 pod downwardapi-volume-e8d312f7-2097-4950-9888-4c45fbef0ed6 container client-container: STEP: delete the pod Oct 30 00:57:33.798: INFO: Waiting for pod downwardapi-volume-e8d312f7-2097-4950-9888-4c45fbef0ed6 to disappear Oct 30 00:57:33.800: INFO: Pod downwardapi-volume-e8d312f7-2097-4950-9888-4c45fbef0ed6 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:33.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-545" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":427,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:33.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should support creating EndpointSlice API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/discovery.k8s.io STEP: getting /apis/discovery.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 30 00:57:33.919: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Oct 30 00:57:33.922: INFO: starting watch STEP: patching STEP: updating Oct 30 00:57:33.932: INFO: waiting for watch events with expected annotations Oct 30 00:57:33.932: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:33.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-1328" for this suite.
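
The create/get/list/watch/patch/delete steps above are plain discovery.k8s.io/v1 API calls. A minimal sketch of the create/list/delete portion, assuming client-go v0.21.x; the slice name and address are illustrative, not the test's actual values:

package main

import (
	"context"
	"fmt"

	discoveryv1 "k8s.io/api/discovery/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// endpointSliceCycle mirrors part of the lifecycle exercised above.
func endpointSliceCycle(cs kubernetes.Interface, ns string) error {
	slice := &discoveryv1.EndpointSlice{
		ObjectMeta:  metav1.ObjectMeta{Name: "example-slice"}, // illustrative name
		AddressType: discoveryv1.AddressTypeIPv4,
		Endpoints: []discoveryv1.Endpoint{
			{Addresses: []string{"10.0.0.1"}}, // illustrative address
		},
	}
	created, err := cs.DiscoveryV1().EndpointSlices(ns).Create(context.TODO(), slice, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	list, err := cs.DiscoveryV1().EndpointSlices(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("created %s; %d slice(s) in %s\n", created.Name, len(list.Items), ns)
	return cs.DiscoveryV1().EndpointSlices(ns).Delete(context.TODO(), created.Name, metav1.DeleteOptions{})
}
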
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":29,"skipped":471,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:55:00.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-00176d17-9bb9-4e06-870f-696107c114d2 in namespace container-probe-9605 Oct 30 00:55:04.065: INFO: Started pod liveness-00176d17-9bb9-4e06-870f-696107c114d2 in namespace container-probe-9605 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 00:55:04.067: INFO: Initial restart count of pod liveness-00176d17-9bb9-4e06-870f-696107c114d2 is 0 Oct 30 00:55:22.103: INFO: Restart count of pod container-probe-9605/liveness-00176d17-9bb9-4e06-870f-696107c114d2 is now 1 (18.035885279s elapsed) Oct 30 00:55:46.148: INFO: Restart count of pod container-probe-9605/liveness-00176d17-9bb9-4e06-870f-696107c114d2 is now 2 (42.080485175s elapsed) Oct 30 00:56:02.182: INFO: Restart count of pod container-probe-9605/liveness-00176d17-9bb9-4e06-870f-696107c114d2 is now 3 (58.114574258s elapsed) Oct 30 00:56:22.224: INFO: Restart count of pod container-probe-9605/liveness-00176d17-9bb9-4e06-870f-696107c114d2 is now 4 (1m18.157176121s elapsed) Oct 30 00:57:36.401: INFO: Restart count of pod container-probe-9605/liveness-00176d17-9bb9-4e06-870f-696107c114d2 is now 5 (2m32.333786535s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:36.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9605" for this suite. 
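
The climbing restart count above is exactly what a failing liveness probe produces: the kubelet kills and restarts the container on each probe failure, and Status.ContainerStatuses[i].RestartCount only ever increases. A sketch of a pod built to fail this way, using the classic create-then-delete health-file pattern; the image, names, and timings are illustrative rather than the test's actual spec:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// flakyLivenessPod stays healthy for ~10s, then fails its probe forever,
// so the kubelet restarts it over and over.
func flakyLivenessPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo", Namespace: ns}, // illustrative
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "liveness",
				Image:   "busybox:1.34",
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					// Handler was renamed ProbeHandler in later API versions.
					Handler:             corev1.Handler{Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
}
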
• [SLOW TEST:156.391 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":221,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:33.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Oct 30 00:57:34.002: INFO: Waiting up to 5m0s for pod "downward-api-df9d21e1-fa63-40fe-a727-9b35c23a26e8" in namespace "downward-api-9830" to be "Succeeded or Failed" Oct 30 00:57:34.004: INFO: Pod "downward-api-df9d21e1-fa63-40fe-a727-9b35c23a26e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078698ms Oct 30 00:57:36.009: INFO: Pod "downward-api-df9d21e1-fa63-40fe-a727-9b35c23a26e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006568418s Oct 30 00:57:38.013: INFO: Pod "downward-api-df9d21e1-fa63-40fe-a727-9b35c23a26e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011331986s STEP: Saw pod success Oct 30 00:57:38.013: INFO: Pod "downward-api-df9d21e1-fa63-40fe-a727-9b35c23a26e8" satisfied condition "Succeeded or Failed" Oct 30 00:57:38.016: INFO: Trying to get logs from node node1 pod downward-api-df9d21e1-fa63-40fe-a727-9b35c23a26e8 container dapi-container: STEP: delete the pod Oct 30 00:57:38.029: INFO: Waiting for pod downward-api-df9d21e1-fa63-40fe-a727-9b35c23a26e8 to disappear Oct 30 00:57:38.031: INFO: Pod downward-api-df9d21e1-fa63-40fe-a727-9b35c23a26e8 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:38.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9830" for this suite. 
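
The test above surfaces limits.cpu/memory and requests.cpu/memory through the downward API's resourceFieldRef env source. A hedged sketch of the pod shape involved; image, names, and quantities are illustrative:

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardEnvPod exposes its own CPU limit and request as env vars.
func downwardEnvPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-env-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.34",
				Command: []string{"sh", "-c", "env"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
					Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
					}},
					{Name: "CPU_REQUEST", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.cpu"},
					}},
				},
			}},
		},
	}
}

Memory works the same way via limits.memory and requests.memory.
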
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":477,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:33.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-655842ee-f479-43b5-8262-587d3d7507e8 STEP: Creating a pod to test consume configMaps Oct 30 00:57:33.733: INFO: Waiting up to 5m0s for pod "pod-configmaps-207f3c22-d9ee-460a-89de-708572342a81" in namespace "configmap-1953" to be "Succeeded or Failed" Oct 30 00:57:33.735: INFO: Pod "pod-configmaps-207f3c22-d9ee-460a-89de-708572342a81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094202ms Oct 30 00:57:35.739: INFO: Pod "pod-configmaps-207f3c22-d9ee-460a-89de-708572342a81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005777712s Oct 30 00:57:37.743: INFO: Pod "pod-configmaps-207f3c22-d9ee-460a-89de-708572342a81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009550042s Oct 30 00:57:39.746: INFO: Pod "pod-configmaps-207f3c22-d9ee-460a-89de-708572342a81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012758964s STEP: Saw pod success Oct 30 00:57:39.746: INFO: Pod "pod-configmaps-207f3c22-d9ee-460a-89de-708572342a81" satisfied condition "Succeeded or Failed" Oct 30 00:57:39.749: INFO: Trying to get logs from node node1 pod pod-configmaps-207f3c22-d9ee-460a-89de-708572342a81 container agnhost-container: STEP: delete the pod Oct 30 00:57:39.875: INFO: Waiting for pod pod-configmaps-207f3c22-d9ee-460a-89de-708572342a81 to disappear Oct 30 00:57:39.878: INFO: Pod pod-configmaps-207f3c22-d9ee-460a-89de-708572342a81 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:39.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1953" for this suite. 
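
Consuming a ConfigMap from a volume, as the test above does, takes one volume referencing the ConfigMap plus a mount in the container. A minimal sketch; the ConfigMap name is a parameter and the data-1 key is an assumption, not the test's actual key:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapVolumePod mounts the named ConfigMap at /etc/configmap-volume
// and prints one key, roughly what the consume-from-volume test checks.
func configMapVolumePod(cmName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-volume-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox:1.34",
				Command:      []string{"cat", "/etc/configmap-volume/data-1"}, // data-1 is an assumed key
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
}
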
• [SLOW TEST:6.188 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":206,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:36.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Oct 30 00:57:36.870: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Oct 30 00:57:38.879: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152256, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152256, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152256, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152256, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 00:57:41.888: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:57:41.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:49.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "crd-webhook-3877" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:13.588 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":11,"skipped":229,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:39.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service multi-endpoint-test in namespace services-3830 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3830 to expose endpoints map[] Oct 30 00:57:39.934: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found Oct 30 00:57:40.941: INFO: successfully validated that service multi-endpoint-test in namespace services-3830 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-3830 Oct 30 00:57:40.954: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:57:42.961: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:57:44.958: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:57:46.957: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3830 to expose endpoints map[pod1:[100]] Oct 30 00:57:46.967: INFO: successfully validated that service multi-endpoint-test in namespace services-3830 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-3830 Oct 30 00:57:46.979: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:57:48.982: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:57:50.984: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3830 to expose endpoints map[pod1:[100] pod2:[101]] Oct 30 00:57:50.997: INFO: successfully validated that service multi-endpoint-test in namespace services-3830 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-3830 STEP: waiting up 
to 3m0s for service multi-endpoint-test in namespace services-3830 to expose endpoints map[pod2:[101]] Oct 30 00:57:51.011: INFO: successfully validated that service multi-endpoint-test in namespace services-3830 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-3830 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3830 to expose endpoints map[] Oct 30 00:57:51.022: INFO: successfully validated that service multi-endpoint-test in namespace services-3830 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:51.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3830" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:11.130 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":10,"skipped":217,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:51.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Oct 30 00:57:51.075: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-8815 da6f476d-fdb6-4278-bd46-2378cbc820de 69650 0 2021-10-30 00:57:51 +0000 UTC map[] map[kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2021-10-30 00:57:51 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tth8g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tth8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations
:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 00:57:51.079: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:57:53.083: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:57:55.085: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Oct 30 00:57:55.085: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8815 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:57:55.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Verifying customized DNS server is configured on pod... Oct 30 00:57:55.191: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8815 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:57:55.191: INFO: >>> kubeConfig: /root/.kube/config Oct 30 00:57:55.286: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:55.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8815" for this suite. 
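
Stripped of defaults, the pod dump above reduces to a short spec: dnsPolicy None plus an explicit dnsConfig, which is what makes the pod's resolv.conf contain exactly the listed nameserver and search path. A sketch using only values visible in the log:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// customDNSPod carries the dnsPolicy/dnsConfig combination from the dump above.
func customDNSPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-dns-nameservers"},
		Spec: corev1.PodSpec{
			// None tells the kubelet to ignore the cluster resolver and use
			// dnsConfig verbatim.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Args:  []string{"pause"},
			}},
		},
	}
}
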
• ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":11,"skipped":220,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:55.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Oct 30 00:57:55.422: INFO: Waiting up to 5m0s for pod "pod-5dadb605-d0b9-4d36-b771-83d29b236a65" in namespace "emptydir-6203" to be "Succeeded or Failed" Oct 30 00:57:55.424: INFO: Pod "pod-5dadb605-d0b9-4d36-b771-83d29b236a65": Phase="Pending", Reason="", readiness=false. Elapsed: 1.907739ms Oct 30 00:57:57.428: INFO: Pod "pod-5dadb605-d0b9-4d36-b771-83d29b236a65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005628314s Oct 30 00:57:59.433: INFO: Pod "pod-5dadb605-d0b9-4d36-b771-83d29b236a65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010382237s STEP: Saw pod success Oct 30 00:57:59.433: INFO: Pod "pod-5dadb605-d0b9-4d36-b771-83d29b236a65" satisfied condition "Succeeded or Failed" Oct 30 00:57:59.435: INFO: Trying to get logs from node node1 pod pod-5dadb605-d0b9-4d36-b771-83d29b236a65 container test-container: STEP: delete the pod Oct 30 00:57:59.453: INFO: Waiting for pod pod-5dadb605-d0b9-4d36-b771-83d29b236a65 to disappear Oct 30 00:57:59.455: INFO: Pod pod-5dadb605-d0b9-4d36-b771-83d29b236a65 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:57:59.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6203" for this suite. 
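
The emptyDir flavor above is memory-backed (tmpfs) and exercised as a non-root user writing a 0644 file. A sketch of the volume shape; the UID and commands are illustrative:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tmpfsEmptyDirPod mounts a memory-backed emptyDir as a non-root user.
func tmpfsEmptyDirPod() *corev1.Pod {
	nonRoot := int64(1000) // illustrative non-root UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium: Memory is what makes this emptyDir a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox:1.34",
				Command:      []string{"sh", "-c", "umask 022 && echo hi > /test-volume/f && ls -l /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
}
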
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":269,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:38.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-4111 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 30 00:57:38.081: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 30 00:57:38.112: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:57:40.115: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:57:42.116: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:57:44.117: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:57:46.118: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:57:48.119: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:57:50.117: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:57:52.116: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:57:54.116: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:57:56.117: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:57:58.118: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 30 00:57:58.123: INFO: The status of Pod netserver-1 is Running (Ready = false) Oct 30 00:58:00.127: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 30 00:58:06.149: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Oct 30 00:58:06.149: INFO: Breadth first check of 10.244.3.151 on host 10.10.190.207... Oct 30 00:58:06.152: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.158:9080/dial?request=hostname&protocol=http&host=10.244.3.151&port=8080&tries=1'] Namespace:pod-network-test-4111 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:58:06.152: INFO: >>> kubeConfig: /root/.kube/config Oct 30 00:58:06.256: INFO: Waiting for responses: map[] Oct 30 00:58:06.256: INFO: reached 10.244.3.151 after 0/1 tries Oct 30 00:58:06.256: INFO: Breadth first check of 10.244.4.5 on host 10.10.190.208... 
Oct 30 00:58:06.259: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.158:9080/dial?request=hostname&protocol=http&host=10.244.4.5&port=8080&tries=1'] Namespace:pod-network-test-4111 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:58:06.259: INFO: >>> kubeConfig: /root/.kube/config Oct 30 00:58:06.342: INFO: Waiting for responses: map[] Oct 30 00:58:06.342: INFO: reached 10.244.4.5 after 0/1 tries Oct 30 00:58:06.342: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:58:06.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4111" for this suite. • [SLOW TEST:28.292 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":487,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:07.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:58:07.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7014" for this suite. 
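
The minute-long wait above leans on the asymmetry between the probe types: a failing readiness probe only keeps the pod NotReady (removing it from service endpoints), while restarts are triggered by liveness failures, so the restart count must stay at 0. A sketch of a pod that can never become ready; names are illustrative:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// neverReadyPod runs fine but fails every readiness check, so it stays
// Running with Ready=false and a restart count of zero.
func neverReadyPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "never-ready-demo"}, // illustrative
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.34",
				Command: []string{"sleep", "3600"},
				ReadinessProbe: &corev1.Probe{
					Handler:       corev1.Handler{Exec: &corev1.ExecAction{Command: []string{"/bin/false"}}},
					PeriodSeconds: 5,
				},
			}},
		},
	}
}
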
• [SLOW TEST:60.041 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":306,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:58:06.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:58:10.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6910" for this suite. 
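
The Kubelet test above schedules a command that always fails and then asserts on the terminated state the kubelet records for the container. A sketch of reading that field with client-go v0.21.x; the helper name is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// terminationReason returns the Reason/ExitCode recorded for the first
// terminated container in the pod, e.g. "Error (exit 1)".
func terminationReason(cs kubernetes.Interface, ns, pod string) (string, error) {
	p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), pod, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	for _, st := range p.Status.ContainerStatuses {
		if t := st.State.Terminated; t != nil {
			return fmt.Sprintf("%s (exit %d)", t.Reason, t.ExitCode), nil
		}
	}
	return "", fmt.Errorf("no terminated container in %s/%s", ns, pod)
}
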
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":497,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:50.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-7vj8r in namespace proxy-3446 I1030 00:57:50.139082 31 runners.go:190] Created replication controller with name: proxy-service-7vj8r, namespace: proxy-3446, replica count: 1 I1030 00:57:51.190064 31 runners.go:190] proxy-service-7vj8r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 00:57:52.190462 31 runners.go:190] proxy-service-7vj8r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 00:57:53.191639 31 runners.go:190] proxy-service-7vj8r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1030 00:57:54.192250 31 runners.go:190] proxy-service-7vj8r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1030 00:57:55.192417 31 runners.go:190] proxy-service-7vj8r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1030 00:57:56.192608 31 runners.go:190] proxy-service-7vj8r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1030 00:57:57.193114 31 runners.go:190] proxy-service-7vj8r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1030 00:57:58.195827 31 runners.go:190] proxy-service-7vj8r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1030 00:57:59.197604 31 runners.go:190] proxy-service-7vj8r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1030 00:58:00.198468 31 runners.go:190] proxy-service-7vj8r Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 30 00:58:00.200: INFO: setup took 10.071497507s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Oct 30 00:58:00.203: INFO: (0) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:1080/proxy/: ... 
(200; 2.324207ms) Oct 30 00:58:00.203: INFO: (0) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 2.49796ms) Oct 30 00:58:00.203: INFO: (0) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.702639ms) Oct 30 00:58:00.204: INFO: (0) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:1080/proxy/: test<... (200; 3.257612ms) Oct 30 00:58:00.204: INFO: (0) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname1/proxy/: foo (200; 3.39333ms) Oct 30 00:58:00.208: INFO: (0) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname2/proxy/: bar (200; 6.932851ms) Oct 30 00:58:00.208: INFO: (0) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 7.239046ms) Oct 30 00:58:00.208: INFO: (0) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 7.092878ms) Oct 30 00:58:00.208: INFO: (0) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname1/proxy/: foo (200; 7.035835ms) Oct 30 00:58:00.208: INFO: (0) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7/proxy/: test (200; 7.01852ms) Oct 30 00:58:00.208: INFO: (0) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname2/proxy/: bar (200; 7.177607ms) Oct 30 00:58:00.209: INFO: (0) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:443/proxy/: test (200; 3.051993ms) Oct 30 00:58:00.215: INFO: (1) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:1080/proxy/: test<... (200; 3.07211ms) Oct 30 00:58:00.215: INFO: (1) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 3.067547ms) Oct 30 00:58:00.215: INFO: (1) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:460/proxy/: tls baz (200; 3.163123ms) Oct 30 00:58:00.215: INFO: (1) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 3.014476ms) Oct 30 00:58:00.216: INFO: (1) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname2/proxy/: bar (200; 3.535042ms) Oct 30 00:58:00.216: INFO: (1) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:1080/proxy/: ... (200; 3.397183ms) Oct 30 00:58:00.216: INFO: (1) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:443/proxy/: ... (200; 2.37413ms) Oct 30 00:58:00.219: INFO: (2) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 2.424813ms) Oct 30 00:58:00.219: INFO: (2) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 2.555001ms) Oct 30 00:58:00.220: INFO: (2) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7/proxy/: test (200; 2.722969ms) Oct 30 00:58:00.220: INFO: (2) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:443/proxy/: test<... 
(200; 2.994818ms) Oct 30 00:58:00.220: INFO: (2) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname2/proxy/: tls qux (200; 3.439923ms) Oct 30 00:58:00.220: INFO: (2) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname1/proxy/: tls baz (200; 3.608609ms) Oct 30 00:58:00.220: INFO: (2) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname1/proxy/: foo (200; 3.550026ms) Oct 30 00:58:00.220: INFO: (2) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname1/proxy/: foo (200; 3.515476ms) Oct 30 00:58:00.221: INFO: (2) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname2/proxy/: bar (200; 3.840945ms) Oct 30 00:58:00.221: INFO: (2) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname2/proxy/: bar (200; 4.214209ms) Oct 30 00:58:00.223: INFO: (3) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:443/proxy/: test<... (200; 2.300106ms) Oct 30 00:58:00.224: INFO: (3) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:462/proxy/: tls qux (200; 2.430436ms) Oct 30 00:58:00.224: INFO: (3) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.266226ms) Oct 30 00:58:00.224: INFO: (3) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 2.294929ms) Oct 30 00:58:00.224: INFO: (3) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 2.364793ms) Oct 30 00:58:00.224: INFO: (3) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:1080/proxy/: ... (200; 2.628501ms) Oct 30 00:58:00.224: INFO: (3) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.687017ms) Oct 30 00:58:00.224: INFO: (3) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7/proxy/: test (200; 2.919317ms) Oct 30 00:58:00.225: INFO: (3) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:460/proxy/: tls baz (200; 3.262829ms) Oct 30 00:58:00.225: INFO: (3) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname1/proxy/: foo (200; 3.454876ms) Oct 30 00:58:00.225: INFO: (3) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname1/proxy/: tls baz (200; 3.584691ms) Oct 30 00:58:00.225: INFO: (3) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname2/proxy/: bar (200; 3.448304ms) Oct 30 00:58:00.225: INFO: (3) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname2/proxy/: bar (200; 3.348229ms) Oct 30 00:58:00.225: INFO: (3) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname2/proxy/: tls qux (200; 3.778785ms) Oct 30 00:58:00.225: INFO: (3) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname1/proxy/: foo (200; 4.008481ms) Oct 30 00:58:00.227: INFO: (4) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:462/proxy/: tls qux (200; 2.141323ms) Oct 30 00:58:00.228: INFO: (4) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 2.194931ms) Oct 30 00:58:00.228: INFO: (4) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.139514ms) Oct 30 00:58:00.228: INFO: (4) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 2.500198ms) Oct 30 00:58:00.228: INFO: (4) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:1080/proxy/: test<... 
(200; 2.491533ms) Oct 30 00:58:00.228: INFO: (4) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:1080/proxy/: ... (200; 2.760808ms) Oct 30 00:58:00.229: INFO: (4) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 3.29252ms) Oct 30 00:58:00.229: INFO: (4) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7/proxy/: test (200; 3.16501ms) Oct 30 00:58:00.229: INFO: (4) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname1/proxy/: foo (200; 3.337286ms) Oct 30 00:58:00.229: INFO: (4) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:460/proxy/: tls baz (200; 3.367181ms) Oct 30 00:58:00.229: INFO: (4) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:443/proxy/: test<... (200; 2.867046ms) Oct 30 00:58:00.233: INFO: (5) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname1/proxy/: foo (200; 3.163013ms) Oct 30 00:58:00.233: INFO: (5) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7/proxy/: test (200; 2.97657ms) Oct 30 00:58:00.234: INFO: (5) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:460/proxy/: tls baz (200; 3.123351ms) Oct 30 00:58:00.234: INFO: (5) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:443/proxy/: ... (200; 3.557596ms) Oct 30 00:58:00.234: INFO: (5) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname2/proxy/: bar (200; 3.64921ms) Oct 30 00:58:00.234: INFO: (5) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname1/proxy/: tls baz (200; 3.716044ms) Oct 30 00:58:00.234: INFO: (5) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname2/proxy/: bar (200; 3.851885ms) Oct 30 00:58:00.234: INFO: (5) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname2/proxy/: tls qux (200; 3.778421ms) Oct 30 00:58:00.234: INFO: (5) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname1/proxy/: foo (200; 4.013277ms) Oct 30 00:58:00.237: INFO: (6) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:1080/proxy/: ... (200; 2.318212ms) Oct 30 00:58:00.237: INFO: (6) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 2.288315ms) Oct 30 00:58:00.237: INFO: (6) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.262675ms) Oct 30 00:58:00.237: INFO: (6) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.203991ms) Oct 30 00:58:00.237: INFO: (6) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:460/proxy/: tls baz (200; 2.480111ms) Oct 30 00:58:00.237: INFO: (6) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:1080/proxy/: test<... 
(200; 2.660946ms) Oct 30 00:58:00.237: INFO: (6) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:443/proxy/: test (200; 3.482033ms) Oct 30 00:58:00.238: INFO: (6) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname2/proxy/: bar (200; 3.442992ms) Oct 30 00:58:00.238: INFO: (6) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname1/proxy/: foo (200; 3.699908ms) Oct 30 00:58:00.239: INFO: (6) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname1/proxy/: tls baz (200; 4.031459ms) Oct 30 00:58:00.239: INFO: (6) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname1/proxy/: foo (200; 3.93634ms) Oct 30 00:58:00.239: INFO: (6) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname2/proxy/: tls qux (200; 4.220554ms) Oct 30 00:58:00.239: INFO: (6) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname2/proxy/: bar (200; 4.385858ms) Oct 30 00:58:00.241: INFO: (7) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:1080/proxy/: test<... (200; 2.081061ms) Oct 30 00:58:00.241: INFO: (7) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 2.044656ms) Oct 30 00:58:00.241: INFO: (7) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.203439ms) Oct 30 00:58:00.241: INFO: (7) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:462/proxy/: tls qux (200; 2.359685ms) Oct 30 00:58:00.242: INFO: (7) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname1/proxy/: foo (200; 2.843146ms) Oct 30 00:58:00.242: INFO: (7) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:460/proxy/: tls baz (200; 2.96657ms) Oct 30 00:58:00.242: INFO: (7) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:443/proxy/: ... (200; 3.188593ms) Oct 30 00:58:00.242: INFO: (7) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7/proxy/: test (200; 3.279315ms) Oct 30 00:58:00.242: INFO: (7) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname1/proxy/: tls baz (200; 3.190291ms) Oct 30 00:58:00.242: INFO: (7) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 3.352801ms) Oct 30 00:58:00.243: INFO: (7) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname1/proxy/: foo (200; 3.892694ms) Oct 30 00:58:00.243: INFO: (7) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname2/proxy/: bar (200; 3.874243ms) Oct 30 00:58:00.243: INFO: (7) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname2/proxy/: bar (200; 3.950077ms) Oct 30 00:58:00.243: INFO: (7) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname2/proxy/: tls qux (200; 4.170621ms) Oct 30 00:58:00.246: INFO: (8) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.278058ms) Oct 30 00:58:00.246: INFO: (8) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.243288ms) Oct 30 00:58:00.246: INFO: (8) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 2.540449ms) Oct 30 00:58:00.246: INFO: (8) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7/proxy/: test (200; 2.410193ms) Oct 30 00:58:00.246: INFO: (8) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:1080/proxy/: test<... 
(200; 2.385272ms) Oct 30 00:58:00.247: INFO: (8) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname2/proxy/: bar (200; 3.216902ms) Oct 30 00:58:00.247: INFO: (8) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 3.323422ms) Oct 30 00:58:00.247: INFO: (8) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:1080/proxy/: ... (200; 3.519606ms) Oct 30 00:58:00.247: INFO: (8) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:460/proxy/: tls baz (200; 3.663281ms) Oct 30 00:58:00.247: INFO: (8) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:462/proxy/: tls qux (200; 3.605114ms) Oct 30 00:58:00.247: INFO: (8) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:443/proxy/: test (200; 2.477593ms) Oct 30 00:58:00.251: INFO: (9) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:1080/proxy/: test<... (200; 2.791806ms) Oct 30 00:58:00.252: INFO: (9) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.788905ms) Oct 30 00:58:00.252: INFO: (9) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:443/proxy/: ... (200; 3.005652ms) Oct 30 00:58:00.252: INFO: (9) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:462/proxy/: tls qux (200; 3.3542ms) Oct 30 00:58:00.252: INFO: (9) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname2/proxy/: tls qux (200; 3.694249ms) Oct 30 00:58:00.254: INFO: (9) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname1/proxy/: foo (200; 5.354435ms) Oct 30 00:58:00.254: INFO: (9) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname2/proxy/: bar (200; 5.479089ms) Oct 30 00:58:00.254: INFO: (9) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname1/proxy/: foo (200; 5.436871ms) Oct 30 00:58:00.254: INFO: (9) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname1/proxy/: tls baz (200; 5.490665ms) Oct 30 00:58:00.254: INFO: (9) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname2/proxy/: bar (200; 5.415367ms) Oct 30 00:58:00.257: INFO: (10) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:462/proxy/: tls qux (200; 2.465138ms) Oct 30 00:58:00.257: INFO: (10) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:1080/proxy/: ... (200; 2.465597ms) Oct 30 00:58:00.257: INFO: (10) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 2.341791ms) Oct 30 00:58:00.257: INFO: (10) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:460/proxy/: tls baz (200; 2.725144ms) Oct 30 00:58:00.257: INFO: (10) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:443/proxy/: test<... 
(200; 2.844169ms) Oct 30 00:58:00.258: INFO: (10) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname1/proxy/: tls baz (200; 3.310778ms) Oct 30 00:58:00.258: INFO: (10) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7/proxy/: test (200; 3.427754ms) Oct 30 00:58:00.258: INFO: (10) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 3.274924ms) Oct 30 00:58:00.258: INFO: (10) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 3.315183ms) Oct 30 00:58:00.258: INFO: (10) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname1/proxy/: foo (200; 3.62726ms) Oct 30 00:58:00.258: INFO: (10) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname2/proxy/: bar (200; 3.627349ms) Oct 30 00:58:00.258: INFO: (10) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname1/proxy/: foo (200; 3.680201ms) Oct 30 00:58:00.258: INFO: (10) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname2/proxy/: tls qux (200; 3.627524ms) Oct 30 00:58:00.258: INFO: (10) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname2/proxy/: bar (200; 3.967967ms) Oct 30 00:58:00.261: INFO: (11) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:460/proxy/: tls baz (200; 2.206216ms) Oct 30 00:58:00.261: INFO: (11) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 2.162219ms) Oct 30 00:58:00.261: INFO: (11) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:462/proxy/: tls qux (200; 2.39649ms) Oct 30 00:58:00.261: INFO: (11) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:443/proxy/: test<... (200; 2.802794ms) Oct 30 00:58:00.262: INFO: (11) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.778879ms) Oct 30 00:58:00.262: INFO: (11) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.760119ms) Oct 30 00:58:00.262: INFO: (11) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname1/proxy/: foo (200; 3.351246ms) Oct 30 00:58:00.262: INFO: (11) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7/proxy/: test (200; 3.421749ms) Oct 30 00:58:00.262: INFO: (11) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:1080/proxy/: ... (200; 3.325567ms) Oct 30 00:58:00.262: INFO: (11) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 3.324ms) Oct 30 00:58:00.263: INFO: (11) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname2/proxy/: tls qux (200; 3.817997ms) Oct 30 00:58:00.263: INFO: (11) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname2/proxy/: bar (200; 3.987701ms) Oct 30 00:58:00.263: INFO: (11) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname1/proxy/: foo (200; 3.882387ms) Oct 30 00:58:00.263: INFO: (11) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname1/proxy/: tls baz (200; 4.189171ms) Oct 30 00:58:00.263: INFO: (11) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname2/proxy/: bar (200; 4.281705ms) Oct 30 00:58:00.266: INFO: (12) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.523084ms) Oct 30 00:58:00.266: INFO: (12) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:443/proxy/: test<... 
(200; 2.923119ms) Oct 30 00:58:00.266: INFO: (12) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname2/proxy/: tls qux (200; 3.178045ms) Oct 30 00:58:00.267: INFO: (12) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 3.286215ms) Oct 30 00:58:00.267: INFO: (12) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname2/proxy/: bar (200; 3.421402ms) Oct 30 00:58:00.267: INFO: (12) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:1080/proxy/: ... (200; 3.360732ms) Oct 30 00:58:00.267: INFO: (12) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7/proxy/: test (200; 3.448317ms) Oct 30 00:58:00.267: INFO: (12) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname2/proxy/: bar (200; 3.522946ms) Oct 30 00:58:00.267: INFO: (12) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 3.49647ms) Oct 30 00:58:00.267: INFO: (12) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname1/proxy/: tls baz (200; 3.74084ms) Oct 30 00:58:00.267: INFO: (12) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname1/proxy/: foo (200; 3.650644ms) Oct 30 00:58:00.269: INFO: (13) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7/proxy/: test (200; 2.447179ms) Oct 30 00:58:00.269: INFO: (13) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:462/proxy/: tls qux (200; 2.350356ms) Oct 30 00:58:00.270: INFO: (13) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:460/proxy/: tls baz (200; 2.334337ms) Oct 30 00:58:00.270: INFO: (13) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:1080/proxy/: test<... (200; 2.47234ms) Oct 30 00:58:00.270: INFO: (13) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 2.561278ms) Oct 30 00:58:00.270: INFO: (13) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.689054ms) Oct 30 00:58:00.270: INFO: (13) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:443/proxy/: ... (200; 3.042105ms) Oct 30 00:58:00.270: INFO: (13) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 3.060436ms) Oct 30 00:58:00.270: INFO: (13) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname2/proxy/: tls qux (200; 3.102244ms) Oct 30 00:58:00.271: INFO: (13) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname2/proxy/: bar (200; 3.524322ms) Oct 30 00:58:00.271: INFO: (13) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname1/proxy/: foo (200; 3.642423ms) Oct 30 00:58:00.271: INFO: (13) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname2/proxy/: bar (200; 3.650607ms) Oct 30 00:58:00.271: INFO: (13) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname1/proxy/: tls baz (200; 3.780055ms) Oct 30 00:58:00.273: INFO: (14) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.191094ms) Oct 30 00:58:00.273: INFO: (14) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7/proxy/: test (200; 2.298289ms) Oct 30 00:58:00.274: INFO: (14) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:460/proxy/: tls baz (200; 2.566007ms) Oct 30 00:58:00.274: INFO: (14) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:1080/proxy/: ... 
(200; 2.804591ms) Oct 30 00:58:00.274: INFO: (14) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 2.849429ms) Oct 30 00:58:00.274: INFO: (14) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:462/proxy/: tls qux (200; 2.855553ms) Oct 30 00:58:00.274: INFO: (14) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.940152ms) Oct 30 00:58:00.274: INFO: (14) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 2.974904ms) Oct 30 00:58:00.274: INFO: (14) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname2/proxy/: bar (200; 3.153628ms) Oct 30 00:58:00.274: INFO: (14) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:443/proxy/: test<... (200; 3.105828ms) Oct 30 00:58:00.275: INFO: (14) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname1/proxy/: foo (200; 3.633497ms) Oct 30 00:58:00.275: INFO: (14) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname1/proxy/: foo (200; 3.679395ms) Oct 30 00:58:00.275: INFO: (14) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname1/proxy/: tls baz (200; 3.957493ms) Oct 30 00:58:00.275: INFO: (14) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname2/proxy/: bar (200; 4.042001ms) Oct 30 00:58:00.275: INFO: (14) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname2/proxy/: tls qux (200; 4.022443ms) Oct 30 00:58:00.277: INFO: (15) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:443/proxy/: test<... (200; 2.620323ms) Oct 30 00:58:00.279: INFO: (15) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7/proxy/: test (200; 3.084358ms) Oct 30 00:58:00.279: INFO: (15) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:1080/proxy/: ... (200; 3.544323ms) Oct 30 00:58:00.279: INFO: (15) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 3.477623ms) Oct 30 00:58:00.279: INFO: (15) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname1/proxy/: foo (200; 3.477298ms) Oct 30 00:58:00.279: INFO: (15) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname2/proxy/: bar (200; 3.5573ms) Oct 30 00:58:00.279: INFO: (15) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:460/proxy/: tls baz (200; 3.589247ms) Oct 30 00:58:00.279: INFO: (15) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname2/proxy/: bar (200; 3.820078ms) Oct 30 00:58:00.280: INFO: (15) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname2/proxy/: tls qux (200; 3.983053ms) Oct 30 00:58:00.280: INFO: (15) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname1/proxy/: tls baz (200; 4.183981ms) Oct 30 00:58:00.280: INFO: (15) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname1/proxy/: foo (200; 4.194838ms) Oct 30 00:58:00.282: INFO: (16) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:1080/proxy/: test<... 
(200; 2.558433ms) Oct 30 00:58:00.282: INFO: (16) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.60063ms) Oct 30 00:58:00.282: INFO: (16) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:460/proxy/: tls baz (200; 2.517732ms) Oct 30 00:58:00.283: INFO: (16) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 2.628369ms) Oct 30 00:58:00.283: INFO: (16) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:1080/proxy/: ... (200; 2.572994ms) Oct 30 00:58:00.283: INFO: (16) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7/proxy/: test (200; 2.651129ms) Oct 30 00:58:00.283: INFO: (16) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:462/proxy/: tls qux (200; 2.580573ms) Oct 30 00:58:00.283: INFO: (16) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 2.836982ms) Oct 30 00:58:00.283: INFO: (16) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.920909ms) Oct 30 00:58:00.283: INFO: (16) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:443/proxy/: test (200; 2.13286ms) Oct 30 00:58:00.287: INFO: (17) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 2.120444ms) Oct 30 00:58:00.287: INFO: (17) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.391516ms) Oct 30 00:58:00.287: INFO: (17) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.341462ms) Oct 30 00:58:00.287: INFO: (17) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 2.680177ms) Oct 30 00:58:00.287: INFO: (17) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:1080/proxy/: test<... (200; 2.817423ms) Oct 30 00:58:00.287: INFO: (17) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname2/proxy/: bar (200; 2.90226ms) Oct 30 00:58:00.288: INFO: (17) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:443/proxy/: ... (200; 3.20403ms) Oct 30 00:58:00.288: INFO: (17) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname2/proxy/: bar (200; 3.146609ms) Oct 30 00:58:00.288: INFO: (17) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname1/proxy/: foo (200; 3.373273ms) Oct 30 00:58:00.288: INFO: (17) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname1/proxy/: tls baz (200; 3.544166ms) Oct 30 00:58:00.289: INFO: (17) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname2/proxy/: tls qux (200; 3.960968ms) Oct 30 00:58:00.289: INFO: (17) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname1/proxy/: foo (200; 3.975588ms) Oct 30 00:58:00.291: INFO: (18) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.056572ms) Oct 30 00:58:00.291: INFO: (18) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 2.18613ms) Oct 30 00:58:00.291: INFO: (18) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:443/proxy/: ... 
(200; 2.622798ms) Oct 30 00:58:00.292: INFO: (18) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.616542ms) Oct 30 00:58:00.292: INFO: (18) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:462/proxy/: tls qux (200; 2.605899ms) Oct 30 00:58:00.292: INFO: (18) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7/proxy/: test (200; 2.933616ms) Oct 30 00:58:00.292: INFO: (18) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname1/proxy/: tls baz (200; 3.232523ms) Oct 30 00:58:00.292: INFO: (18) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:1080/proxy/: test<... (200; 3.001158ms) Oct 30 00:58:00.292: INFO: (18) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname1/proxy/: foo (200; 3.170182ms) Oct 30 00:58:00.292: INFO: (18) /api/v1/namespaces/proxy-3446/services/proxy-service-7vj8r:portname2/proxy/: bar (200; 3.48579ms) Oct 30 00:58:00.293: INFO: (18) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname2/proxy/: tls qux (200; 3.487793ms) Oct 30 00:58:00.293: INFO: (18) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname2/proxy/: bar (200; 3.452153ms) Oct 30 00:58:00.293: INFO: (18) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname1/proxy/: foo (200; 3.822019ms) Oct 30 00:58:00.295: INFO: (19) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 1.968647ms) Oct 30 00:58:00.295: INFO: (19) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:460/proxy/: tls baz (200; 2.185509ms) Oct 30 00:58:00.296: INFO: (19) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7/proxy/: test (200; 2.591507ms) Oct 30 00:58:00.296: INFO: (19) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:162/proxy/: bar (200; 2.68345ms) Oct 30 00:58:00.296: INFO: (19) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.67ms) Oct 30 00:58:00.296: INFO: (19) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:462/proxy/: tls qux (200; 2.780215ms) Oct 30 00:58:00.296: INFO: (19) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:1080/proxy/: test<... (200; 2.738675ms) Oct 30 00:58:00.296: INFO: (19) /api/v1/namespaces/proxy-3446/pods/proxy-service-7vj8r-nrpj7:160/proxy/: foo (200; 2.695141ms) Oct 30 00:58:00.296: INFO: (19) /api/v1/namespaces/proxy-3446/services/http:proxy-service-7vj8r:portname1/proxy/: foo (200; 3.117879ms) Oct 30 00:58:00.296: INFO: (19) /api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:1080/proxy/: ... 
(200; 3.12892ms) Oct 30 00:58:00.296: INFO: (19) /api/v1/namespaces/proxy-3446/services/https:proxy-service-7vj8r:tlsportname2/proxy/: tls qux (200; 3.395991ms) Oct 30 00:58:00.296: INFO: (19) /api/v1/namespaces/proxy-3446/pods/https:proxy-service-7vj8r-nrpj7:443/proxy/: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W1030 00:57:13.412328 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 00:58:15.428: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Oct 30 00:58:15.428: INFO: Deleting pod "simpletest-rc-to-be-deleted-2chjf" in namespace "gc-6473" Oct 30 00:58:15.435: INFO: Deleting pod "simpletest-rc-to-be-deleted-2snw6" in namespace "gc-6473" Oct 30 00:58:15.440: INFO: Deleting pod "simpletest-rc-to-be-deleted-5pfng" in namespace "gc-6473" Oct 30 00:58:15.448: INFO: Deleting pod "simpletest-rc-to-be-deleted-66mpz" in namespace "gc-6473" Oct 30 00:58:15.453: INFO: Deleting pod "simpletest-rc-to-be-deleted-6nxrt" in namespace "gc-6473" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:58:15.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6473" for this suite. • [SLOW TEST:72.146 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:18.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W1030 00:57:19.521669 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 00:58:21.540: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
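------------------------------
The proxy sweep above repeats one request set twenty times, covering every form of the apiserver proxy subresource: /api/v1/namespaces/<ns>/pods/<scheme>:<pod>:<port>/proxy/ and the matching services/... variant, where the http/https scheme prefix is optional and the port segment may be numeric (160, 443) or a named port (portname1, tlsportname2). As a minimal sketch of issuing one such request with client-go, assuming the same kubeconfig this suite logs at startup (pod name and namespace copied from the log):

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Same kubeconfig path the suite prints at startup.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Mirrors the log entries above: pods/<scheme>:<name>:<port>/proxy/.
        body, err := cs.CoreV1().RESTClient().Get().
            AbsPath("/api/v1/namespaces/proxy-3446/pods/http:proxy-service-7vj8r-nrpj7:160/proxy/").
            DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s\n", body) // the :160 endpoint answers "foo" in the run above
    }
------------------------------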
[AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:58:21.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3090" for this suite. • [SLOW TEST:63.094 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":9,"skipped":224,"failed":0} S ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":25,"skipped":369,"failed":0} [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:58:15.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:58:21.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2562" for this suite. 
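------------------------------
Both garbage-collector cases above turn on metadata.ownerReferences: a dependent is collected only once every owner listed there is gone, which is why pods given simpletest-rc-to-stay as a second owner survive the deletion of simpletest-rc-to-be-deleted, and why the deployment's ReplicaSet and pods disappear once the deployment is deleted without orphaning. A minimal sketch of the non-orphaning delete with client-go (names and namespace copied from the log; a real caller would pick the policy to match intent):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Background deletion removes the owner at once and lets the garbage
        // collector reap dependents afterwards; DeletePropagationOrphan would
        // keep them, matching the "orphaning" variants elsewhere in the suite.
        policy := metav1.DeletePropagationBackground
        err = cs.CoreV1().ReplicationControllers("gc-6473").Delete(
            context.TODO(), "simpletest-rc-to-be-deleted",
            metav1.DeleteOptions{PropagationPolicy: &policy})
        if err != nil {
            panic(err)
        }
    }
------------------------------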
• [SLOW TEST:6.082 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":369,"failed":0} [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:58:21.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-5781dd16-a540-4fb3-a00f-e72390e7a2d4 STEP: Creating the pod Oct 30 00:58:21.597: INFO: The status of Pod pod-configmaps-0b33639b-e014-4ab2-8325-a1267c82887c is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:58:23.600: INFO: The status of Pod pod-configmaps-0b33639b-e014-4ab2-8325-a1267c82887c is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:58:25.602: INFO: The status of Pod pod-configmaps-0b33639b-e014-4ab2-8325-a1267c82887c is Running (Ready = true) STEP: Updating configmap configmap-test-upd-5781dd16-a540-4fb3-a00f-e72390e7a2d4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:58:27.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4909" for this suite. 
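------------------------------
The ConfigMap update case above works because ConfigMap volumes are projections that the kubelet re-syncs on its periodic sync loop: the pod sees the new file content without a restart (subPath mounts are the known exception). A sketch of the update step with client-go, using the namespace and ConfigMap name from the log; the key and values are illustrative, since the log does not print them:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        cms := cs.CoreV1().ConfigMaps("configmap-4909")
        cm, err := cms.Get(context.TODO(),
            "configmap-test-upd-5781dd16-a540-4fb3-a00f-e72390e7a2d4", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        cm.Data["data-1"] = "value-2" // illustrative key/value
        if _, err := cms.Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        // A pod mounting this ConfigMap observes the new file content within
        // roughly one kubelet sync period, which is what the test waits for.
    }
------------------------------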
• [SLOW TEST:6.080 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":369,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:58:21.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-83366f20-51e8-4c64-aa37-a1d04de78569 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:58:27.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1534" for this suite. • [SLOW TEST:6.058 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":254,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:58:12.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
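------------------------------
The binary-data ConfigMap case just above relies on the binaryData field, which carries arbitrary bytes (base64-encoded on the wire) next to the UTF-8-only data field; when the ConfigMap is volume-mounted, keys from both maps become files. A minimal sketch (the name and payload here are hypothetical; the log only shows the generated test name):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "binary-demo"}, // hypothetical name
            Data:       map[string]string{"text": "plain utf-8"},
            // Keys may not collide between Data and BinaryData; each key of
            // either map becomes a file when the ConfigMap is volume-mounted.
            BinaryData: map[string][]byte{"dump": {0xde, 0xad, 0xbe, 0xef}},
        }
        if _, err := cs.CoreV1().ConfigMaps("default").Create(
            context.TODO(), cm, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }
------------------------------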
Oct 30 00:58:13.013: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:58:15.016: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:58:17.015: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:58:19.018: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:58:21.016: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Oct 30 00:58:21.031: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:58:23.035: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:58:25.036: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Oct 30 00:58:25.044: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 30 00:58:25.046: INFO: Pod pod-with-prestop-exec-hook still exists Oct 30 00:58:27.047: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 30 00:58:27.049: INFO: Pod pod-with-prestop-exec-hook still exists Oct 30 00:58:29.047: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 30 00:58:29.050: INFO: Pod pod-with-prestop-exec-hook still exists Oct 30 00:58:31.048: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 30 00:58:31.052: INFO: Pod pod-with-prestop-exec-hook still exists Oct 30 00:58:33.048: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 30 00:58:33.050: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:58:33.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-904" for this suite. 
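------------------------------
In the prestop case above, one pod (pod-handle-http-request) serves HTTP while the pod under test carries a preStop exec hook; deleting the second pod makes the kubelet run the hook command during graceful termination, and the test then asks the handler pod whether the request arrived. A sketch of the relevant spec fragment against the v1.21-era k8s.io/api used by this suite (the hook target address is hypothetical):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func preStopPodSpec() corev1.PodSpec {
        return corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "pod-with-prestop-exec-hook",
                Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
                Lifecycle: &corev1.Lifecycle{
                    // corev1.Handler is the v1.21 name; newer releases of
                    // k8s.io/api call this type LifecycleHandler.
                    PreStop: &corev1.Handler{
                        Exec: &corev1.ExecAction{
                            // Runs inside the container before SIGTERM is sent;
                            // 10.0.0.1:8080 stands in for the handler pod's address.
                            Command: []string{"sh", "-c",
                                "curl http://10.0.0.1:8080/echo?msg=prestop-exec-hook"},
                        },
                    },
                },
            }},
        }
    }

    func main() {
        fmt.Printf("%+v\n", preStopPodSpec())
    }
------------------------------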
• [SLOW TEST:20.091 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":282,"failed":0} [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:58:33.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:58:33.098: INFO: Got root ca configmap in namespace "svcaccounts-9709" Oct 30 00:58:33.102: INFO: Deleted root ca configmap in namespace "svcaccounts-9709" STEP: waiting for a new root ca configmap created Oct 30 00:58:33.605: INFO: Recreated root ca configmap in namespace "svcaccounts-9709" Oct 30 00:58:33.609: INFO: Updated root ca configmap in namespace "svcaccounts-9709" STEP: waiting for the root ca configmap reconciled Oct 30 00:58:34.112: INFO: Reconciled root ca configmap in namespace "svcaccounts-9709" [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:58:34.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9709" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":14,"skipped":282,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:25.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W1030 00:57:35.679446 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 00:58:37.698: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
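------------------------------
The svcaccounts case above exercises the root-CA publisher controller: every namespace is guaranteed a kube-root-ca.crt ConfigMap holding the cluster CA bundle, and the controller recreates it after deletion and reverts out-of-band edits, which is exactly the delete/recreate/update/reconcile sequence in the log. Reading it is a one-call sketch:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Present in any namespace; "default" is used here for illustration.
        cm, err := cs.CoreV1().ConfigMaps("default").Get(
            context.TODO(), "kube-root-ca.crt", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println(cm.Data["ca.crt"]) // PEM bundle pods use to verify the apiserver
    }
------------------------------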
[AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:58:37.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7503" for this suite. • [SLOW TEST:72.077 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":19,"skipped":285,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:58:34.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 00:58:34.489: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 00:58:36.499: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152314, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152314, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152314, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152314, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 00:58:39.508: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:58:39.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8271" for this suite. STEP: Destroying namespace "webhook-8271-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.437 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":15,"skipped":297,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:58:27.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1386 STEP: creating an pod Oct 30 00:58:27.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6466 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Oct 30 00:58:27.853: INFO: stderr: "" Oct 30 00:58:27.853: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for log generator to start. Oct 30 00:58:27.853: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Oct 30 00:58:27.853: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6466" to be "running and ready, or succeeded" Oct 30 00:58:27.858: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.661956ms Oct 30 00:58:29.864: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010731815s Oct 30 00:58:31.868: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014752951s Oct 30 00:58:33.871: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.017930014s Oct 30 00:58:33.871: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Oct 30 00:58:33.871: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Oct 30 00:58:33.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6466 logs logs-generator logs-generator' Oct 30 00:58:34.029: INFO: stderr: "" Oct 30 00:58:34.029: INFO: stdout: "I1030 00:58:30.958667 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/tpm 251\nI1030 00:58:31.159146 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/4r4 201\nI1030 00:58:31.359522 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/h4b 543\nI1030 00:58:31.558783 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/hn2 290\nI1030 00:58:31.759391 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/4r49 269\nI1030 00:58:31.959760 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/6xt 571\nI1030 00:58:32.159346 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/fmvr 598\nI1030 00:58:32.358772 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/4lb 583\nI1030 00:58:32.559610 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/8ff9 471\nI1030 00:58:32.758762 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/wjq 394\nI1030 00:58:32.959151 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/g6zv 566\nI1030 00:58:33.159449 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/vmq 288\nI1030 00:58:33.358862 1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/hjlj 429\nI1030 00:58:33.559308 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/mgst 478\nI1030 00:58:33.758774 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/7t9 300\nI1030 00:58:33.959126 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/2gvs 502\n" STEP: limiting log lines Oct 30 00:58:34.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6466 logs logs-generator logs-generator --tail=1' Oct 30 00:58:34.175: INFO: stderr: "" Oct 30 00:58:34.175: INFO: stdout: "I1030 00:58:34.159502 1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/khj9 476\n" Oct 30 00:58:34.175: INFO: got output "I1030 00:58:34.159502 1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/khj9 476\n" STEP: limiting log bytes Oct 30 00:58:34.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6466 logs logs-generator logs-generator --limit-bytes=1' Oct 30 00:58:34.338: INFO: stderr: "" Oct 30 00:58:34.338: INFO: stdout: "I" Oct 30 00:58:34.338: INFO: got output "I" STEP: exposing timestamps Oct 30 00:58:34.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6466 logs logs-generator logs-generator --tail=1 --timestamps' Oct 30 00:58:34.495: INFO: stderr: "" Oct 30 00:58:34.495: INFO: stdout: "2021-10-30T00:58:34.358825052Z I1030 00:58:34.358751 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/c2v 517\n" Oct 30 00:58:34.495: INFO: got output "2021-10-30T00:58:34.358825052Z I1030 00:58:34.358751 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/c2v 517\n" STEP: restricting to a time range Oct 30 00:58:36.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6466 logs logs-generator logs-generator --since=1s' Oct 30 00:58:37.150: INFO: stderr: "" Oct 30 00:58:37.150: INFO: stdout: "I1030 00:58:36.159477 1 logs_generator.go:76] 26 GET /api/v1/namespaces/kube-system/pods/2ld 594\nI1030 00:58:36.358854 1 
logs_generator.go:76] 27 PUT /api/v1/namespaces/default/pods/bmf 533\nI1030 00:58:36.559484 1 logs_generator.go:76] 28 GET /api/v1/namespaces/kube-system/pods/czh 551\nI1030 00:58:36.758879 1 logs_generator.go:76] 29 POST /api/v1/namespaces/ns/pods/nnh7 301\nI1030 00:58:36.958694 1 logs_generator.go:76] 30 POST /api/v1/namespaces/ns/pods/z9cz 491\n" Oct 30 00:58:37.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6466 logs logs-generator logs-generator --since=24h' Oct 30 00:58:37.308: INFO: stderr: "" Oct 30 00:58:37.308: INFO: stdout: "I1030 00:58:30.958667 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/tpm 251\nI1030 00:58:31.159146 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/4r4 201\nI1030 00:58:31.359522 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/h4b 543\nI1030 00:58:31.558783 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/hn2 290\nI1030 00:58:31.759391 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/4r49 269\nI1030 00:58:31.959760 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/6xt 571\nI1030 00:58:32.159346 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/fmvr 598\nI1030 00:58:32.358772 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/4lb 583\nI1030 00:58:32.559610 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/8ff9 471\nI1030 00:58:32.758762 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/wjq 394\nI1030 00:58:32.959151 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/g6zv 566\nI1030 00:58:33.159449 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/vmq 288\nI1030 00:58:33.358862 1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/hjlj 429\nI1030 00:58:33.559308 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/mgst 478\nI1030 00:58:33.758774 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/7t9 300\nI1030 00:58:33.959126 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/2gvs 502\nI1030 00:58:34.159502 1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/khj9 476\nI1030 00:58:34.358751 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/c2v 517\nI1030 00:58:34.559289 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/hdp 370\nI1030 00:58:34.759584 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/r9pq 283\nI1030 00:58:34.958822 1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/59z 337\nI1030 00:58:35.159134 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/rln 509\nI1030 00:58:35.359237 1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/r8lk 597\nI1030 00:58:35.559545 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/f9z 436\nI1030 00:58:35.758752 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/fjmv 569\nI1030 00:58:35.959196 1 logs_generator.go:76] 25 GET /api/v1/namespaces/kube-system/pods/7cj 297\nI1030 00:58:36.159477 1 logs_generator.go:76] 26 GET /api/v1/namespaces/kube-system/pods/2ld 594\nI1030 00:58:36.358854 1 logs_generator.go:76] 27 PUT /api/v1/namespaces/default/pods/bmf 533\nI1030 00:58:36.559484 1 logs_generator.go:76] 28 GET /api/v1/namespaces/kube-system/pods/czh 551\nI1030 00:58:36.758879 1 logs_generator.go:76] 29 POST /api/v1/namespaces/ns/pods/nnh7 301\nI1030 00:58:36.958694 1 logs_generator.go:76] 30 POST /api/v1/namespaces/ns/pods/z9cz 491\nI1030 00:58:37.159091 1 logs_generator.go:76] 31 PUT 
/api/v1/namespaces/ns/pods/4snm 431\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391 Oct 30 00:58:37.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6466 delete pod logs-generator' Oct 30 00:58:42.901: INFO: stderr: "" Oct 30 00:58:42.901: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:58:42.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6466" for this suite. • [SLOW TEST:15.239 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":28,"skipped":386,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:59.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Oct 30 00:57:59.562: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Oct 30 00:58:16.812: INFO: >>> kubeConfig: /root/.kube/config Oct 30 00:58:25.356: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:58:44.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8345" for this suite. 
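------------------------------
The kubectl-logs case above walks through the main retrieval and filtering flags: --tail=1 for the last line, --limit-bytes=1 to cap the payload, --timestamps to prefix RFC3339 timestamps, and --since=1s / --since=24h for time ranges. The same knobs exist programmatically as PodLogOptions in client-go; a sketch equivalent to "kubectl logs logs-generator --tail=1 --timestamps --since=1s" in the namespace from the log:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        tail, since := int64(1), int64(1)
        raw, err := cs.CoreV1().Pods("kubectl-6466").GetLogs("logs-generator",
            &corev1.PodLogOptions{
                TailLines:    &tail,  // --tail=1
                SinceSeconds: &since, // --since=1s
                Timestamps:   true,   // --timestamps
                // LimitBytes (pointer to 1) would mirror --limit-bytes=1 above.
            }).DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", raw)
    }
------------------------------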
• [SLOW TEST:44.632 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":13,"skipped":315,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:58:39.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 00:58:40.103: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 00:58:42.111: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152320, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152320, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152320, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152320, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 00:58:44.113: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152320, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152320, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152320, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152320, loc:(*time.Location)(0x9e12f00)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 00:58:47.123: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:58:47.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-784" for this suite. STEP: Destroying namespace "webhook-784-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.546 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":16,"skipped":327,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:58:47.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 30 00:58:47.640: INFO: starting watch STEP: patching STEP: updating Oct 30 00:58:47.648: INFO: waiting for watch events with expected annotations Oct 30 00:58:47.648: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:58:47.683: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-4670" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":17,"skipped":337,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:58:47.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Request ServerVersion STEP: Confirm major version Oct 30 00:58:47.726: INFO: Major version: 1 STEP: Confirm minor version Oct 30 00:58:47.726: INFO: cleanMinorVersion: 21 Oct 30 00:58:47.726: INFO: Minor version: 21 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:58:47.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-2823" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":18,"skipped":343,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:58:42.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:58:46.988: INFO: Deleting pod "var-expansion-4b5dfc49-1564-4ac0-a133-abeb2d048169" in namespace "var-expansion-6621" Oct 30 00:58:46.994: INFO: Wait up to 5m0s for pod "var-expansion-4b5dfc49-1564-4ac0-a133-abeb2d048169" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:59:02.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6621" for this suite. 
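Note on the test just above: "should fail substituting values in a volume subpath with absolute path" creates a pod whose volume mount substitutes an environment variable into the subpath (presumably via subPathExpr), expects the node to reject the absolute result, and then deletes the pod — the 5m0s deletion wait logged above. A minimal Go sketch of such a pod spec, assuming client-go and the kubeconfig path from this run; pod name, namespace, image, and variable are illustrative, not the e2e fixture:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-absolute-subpath"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "work",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
				Env:     []corev1.EnvVar{{Name: "ABS", Value: "/absolute/path"}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "work",
					MountPath: "/mnt/work",
					// Expansion happens on the node; an absolute result is
					// rejected there, so the pod never becomes Ready and the
					// test deletes it, as in the log above.
					SubPathExpr: "$(ABS)",
				}},
			}},
		},
	}
	_, err = client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	fmt.Println("create:", err)
}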
• [SLOW TEST:20.056 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":29,"skipped":408,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:58:37.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-9683 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 30 00:58:37.764: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 30 00:58:37.804: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:58:39.809: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:58:41.808: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:58:43.808: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:58:45.809: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:58:47.809: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:58:49.811: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:58:51.807: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:58:53.807: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:58:55.807: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:58:57.809: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 00:58:59.808: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 30 00:58:59.816: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 30 00:59:03.839: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Oct 30 00:59:03.839: INFO: Breadth first check of 10.244.3.168 on host 10.10.190.207... Oct 30 00:59:03.841: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.26:9080/dial?request=hostname&protocol=udp&host=10.244.3.168&port=8081&tries=1'] Namespace:pod-network-test-9683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:59:03.841: INFO: >>> kubeConfig: /root/.kube/config Oct 30 00:59:03.939: INFO: Waiting for responses: map[] Oct 30 00:59:03.939: INFO: reached 10.244.3.168 after 0/1 tries Oct 30 00:59:03.939: INFO: Breadth first check of 10.244.4.25 on host 10.10.190.208... 
Oct 30 00:59:03.942: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.26:9080/dial?request=hostname&protocol=udp&host=10.244.4.25&port=8081&tries=1'] Namespace:pod-network-test-9683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:59:03.942: INFO: >>> kubeConfig: /root/.kube/config Oct 30 00:59:04.267: INFO: Waiting for responses: map[] Oct 30 00:59:04.267: INFO: reached 10.244.4.25 after 0/1 tries Oct 30 00:59:04.267: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:59:04.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9683" for this suite. • [SLOW TEST:26.531 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":303,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:58:44.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostport STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled Oct 30 00:58:44.238: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:58:46.242: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:58:48.242: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:58:50.243: INFO: The status of Pod pod1 is Running (Ready = true) STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.10.190.207 on the node which pod1 resides and expect scheduled Oct 30 00:58:50.256: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:58:52.258: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:58:54.259: INFO: The status of Pod pod2 is Running (Ready = true) STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.10.190.207 but use UDP protocol on the node which pod2 resides Oct 30 00:58:54.273: INFO: The status of Pod pod3 is 
Pending, waiting for it to be Running (with Ready = true) Oct 30 00:58:56.278: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:58:58.277: INFO: The status of Pod pod3 is Running (Ready = true) Oct 30 00:58:58.290: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:59:00.295: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:59:02.293: INFO: The status of Pod e2e-host-exec is Running (Ready = true) STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 Oct 30 00:59:02.295: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.10.190.207 http://127.0.0.1:54323/hostname] Namespace:hostport-3026 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:59:02.295: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.207, port: 54323 Oct 30 00:59:02.391: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.10.190.207:54323/hostname] Namespace:hostport-3026 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:59:02.391: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.207, port: 54323 UDP Oct 30 00:59:02.479: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.10.190.207 54323] Namespace:hostport-3026 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 00:59:02.479: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:59:07.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostport-3026" for this suite. 
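Note on the hostPort checks above: pod1, pod2, and pod3 can all hold host port 54323 on the same node because a host-port conflict is keyed on the full (hostIP, hostPort, protocol) triple, not on the port number alone; the curl and nc probes from e2e-host-exec then confirm each binding answers. A minimal Go sketch of the three distinguishing ContainerPort entries (the container port number is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Same host port three times, but each (hostIP, hostPort, protocol)
	// triple is distinct, so all three pods may land on one node.
	ports := []corev1.ContainerPort{
		{ContainerPort: 8080, HostPort: 54323, HostIP: "127.0.0.1", Protocol: corev1.ProtocolTCP},     // pod1
		{ContainerPort: 8080, HostPort: 54323, HostIP: "10.10.190.207", Protocol: corev1.ProtocolTCP}, // pod2
		{ContainerPort: 8080, HostPort: 54323, HostIP: "10.10.190.207", Protocol: corev1.ProtocolUDP}, // pod3
	}
	for i, p := range ports {
		fmt.Printf("pod%d binds %s %s:%d\n", i+1, p.Protocol, p.HostIP, p.HostPort)
	}
}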
• [SLOW TEST:23.765 seconds] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:59:04.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [BeforeEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:59:04.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption-2 STEP: Waiting for a default service account to be provisioned in namespace [It] should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: listing a collection of PDBs across all namespaces STEP: listing a collection of PDBs in namespace disruption-6556 STEP: deleting a collection of PDBs STEP: Waiting for the PDB collection to be deleted [AfterEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:59:10.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2-4746" for this suite. [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:59:10.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-6556" for this suite. 
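Note on the PodDisruptionBudget steps above: the test lists PDBs across all namespaces, lists them within one namespace, then removes them with a single collection delete and waits for the collection to disappear. A minimal client-go sketch of that list-and-delete-collection pattern, assuming the kubeconfig path from this run; the namespace matches the log and error handling is abbreviated:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.TODO()

	// Listing across all namespaces uses the empty-string namespace.
	all, err := client.PolicyV1().PodDisruptionBudgets("").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("PDBs in all namespaces:", len(all.Items))

	// DeleteCollection removes every PDB matching the list options at once.
	err = client.PolicyV1().PodDisruptionBudgets("disruption-6556").
		DeleteCollection(ctx, metav1.DeleteOptions{}, metav1.ListOptions{})
	fmt.Println("delete collection:", err)
}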
• [SLOW TEST:6.112 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:75 should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":21,"skipped":307,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:59:10.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:59:10.447: INFO: Waiting up to 5m0s for pod "busybox-user-65534-c3cf0459-4d52-4b96-bff7-709aad0ad34e" in namespace "security-context-test-8259" to be "Succeeded or Failed" Oct 30 00:59:10.449: INFO: Pod "busybox-user-65534-c3cf0459-4d52-4b96-bff7-709aad0ad34e": Phase="Pending", Reason="", readiness=false. Elapsed: 1.957793ms Oct 30 00:59:12.453: INFO: Pod "busybox-user-65534-c3cf0459-4d52-4b96-bff7-709aad0ad34e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005956803s Oct 30 00:59:14.456: INFO: Pod "busybox-user-65534-c3cf0459-4d52-4b96-bff7-709aad0ad34e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009121361s Oct 30 00:59:16.461: INFO: Pod "busybox-user-65534-c3cf0459-4d52-4b96-bff7-709aad0ad34e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014097714s Oct 30 00:59:18.466: INFO: Pod "busybox-user-65534-c3cf0459-4d52-4b96-bff7-709aad0ad34e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019580947s Oct 30 00:59:20.471: INFO: Pod "busybox-user-65534-c3cf0459-4d52-4b96-bff7-709aad0ad34e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.024415382s Oct 30 00:59:20.471: INFO: Pod "busybox-user-65534-c3cf0459-4d52-4b96-bff7-709aad0ad34e" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:59:20.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8259" for this suite. 
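Note on the runAsUser test above: the pod pins the container UID to 65534 ("nobody") through the container-level securityContext and is then polled until it reaches "Succeeded or Failed", as logged. A minimal Go sketch of such a pod, assuming client-go; pod name, namespace, image, and command are illustrative, not the e2e fixture:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "id -u"}, // should print 65534
				SecurityContext: &corev1.SecurityContext{
					RunAsUser: int64Ptr(65534), // the conventional "nobody" UID
				},
			}},
		},
	}
	created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// The e2e framework then waits for phase Succeeded or Failed, as logged.
	fmt.Println("created", created.Name)
}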
• [SLOW TEST:10.066 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":314,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:59:20.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pod templates Oct 30 00:59:20.530: INFO: created test-podtemplate-1 Oct 30 00:59:20.533: INFO: created test-podtemplate-2 Oct 30 00:59:20.537: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Oct 30 00:59:20.539: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Oct 30 00:59:20.549: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:59:20.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-5903" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":23,"skipped":327,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:59:20.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's args Oct 30 00:59:20.608: INFO: Waiting up to 5m0s for pod "var-expansion-8fa454eb-6b99-4ddc-a238-db5ed59f2aaa" in namespace "var-expansion-1084" to be "Succeeded or Failed" Oct 30 00:59:20.610: INFO: Pod "var-expansion-8fa454eb-6b99-4ddc-a238-db5ed59f2aaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035795ms Oct 30 00:59:22.615: INFO: Pod "var-expansion-8fa454eb-6b99-4ddc-a238-db5ed59f2aaa": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007108897s Oct 30 00:59:24.618: INFO: Pod "var-expansion-8fa454eb-6b99-4ddc-a238-db5ed59f2aaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010346196s STEP: Saw pod success Oct 30 00:59:24.618: INFO: Pod "var-expansion-8fa454eb-6b99-4ddc-a238-db5ed59f2aaa" satisfied condition "Succeeded or Failed" Oct 30 00:59:24.621: INFO: Trying to get logs from node node2 pod var-expansion-8fa454eb-6b99-4ddc-a238-db5ed59f2aaa container dapi-container: STEP: delete the pod Oct 30 00:59:24.635: INFO: Waiting for pod var-expansion-8fa454eb-6b99-4ddc-a238-db5ed59f2aaa to disappear Oct 30 00:59:24.637: INFO: Pod var-expansion-8fa454eb-6b99-4ddc-a238-db5ed59f2aaa no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:59:24.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1084" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":334,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:54:25.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-8799 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-8799 STEP: Creating statefulset with conflicting port in namespace statefulset-8799 STEP: Waiting until pod test-pod will start running in namespace statefulset-8799 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8799 Oct 30 00:59:29.438: FAIL: Pod ss-0 expected to be re-created at least once Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00020bb00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00020bb00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc00020bb00, 0x70e7b58) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Oct 30 00:59:29.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8799 describe po test-pod' Oct 30 00:59:29.631: INFO: stderr: "" Oct 30 00:59:29.631: INFO: stdout: "Name: test-pod\nNamespace: statefulset-8799\nPriority: 0\nNode: 
node2/10.10.190.208\nStart Time: Sat, 30 Oct 2021 00:54:25 +0000\nLabels: \nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.203\"\n ],\n \"mac\": \"52:c1:28:13:f6:ba\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.203\"\n ],\n \"mac\": \"52:c1:28:13:f6:ba\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: privileged\nStatus: Running\nIP: 10.244.4.203\nIPs:\n IP: 10.244.4.203\nContainers:\n webserver:\n Container ID: docker://c41b2885b4ff139cc7f635d1615d0fb72f5539aea5593eb1713a245a41698638\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Sat, 30 Oct 2021 00:54:28 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fkxw5 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-fkxw5:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulling 5m2s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n Normal Pulled 5m2s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\" in 302.418678ms\n Normal Created 5m1s kubelet Created container webserver\n Normal Started 5m1s kubelet Started container webserver\n" Oct 30 00:59:29.631: INFO: Output of kubectl describe test-pod: Name: test-pod Namespace: statefulset-8799 Priority: 0 Node: node2/10.10.190.208 Start Time: Sat, 30 Oct 2021 00:54:25 +0000 Labels: Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.203" ], "mac": "52:c1:28:13:f6:ba", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.203" ], "mac": "52:c1:28:13:f6:ba", "default": true, "dns": {} }] kubernetes.io/psp: privileged Status: Running IP: 10.244.4.203 IPs: IP: 10.244.4.203 Containers: webserver: Container ID: docker://c41b2885b4ff139cc7f635d1615d0fb72f5539aea5593eb1713a245a41698638 Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 Port: 21017/TCP Host Port: 21017/TCP State: Running Started: Sat, 30 Oct 2021 00:54:28 +0000 Ready: True Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fkxw5 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-fkxw5: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true QoS Class: BestEffort Node-Selectors: 
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulling 5m2s kubelet Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" Normal Pulled 5m2s kubelet Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 302.418678ms Normal Created 5m1s kubelet Created container webserver Normal Started 5m1s kubelet Started container webserver Oct 30 00:59:29.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8799 logs test-pod --tail=100' Oct 30 00:59:29.787: INFO: stderr: "" Oct 30 00:59:29.787: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.4.203. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.4.203. Set the 'ServerName' directive globally to suppress this message\n[Sat Oct 30 00:54:28.255959 2021] [mpm_event:notice] [pid 1:tid 140042956995432] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sat Oct 30 00:54:28.256003 2021] [core:notice] [pid 1:tid 140042956995432] AH00094: Command line: 'httpd -D FOREGROUND'\n" Oct 30 00:59:29.787: INFO: Last 100 log lines of test-pod: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.4.203. Set the 'ServerName' directive globally to suppress this message AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.4.203. Set the 'ServerName' directive globally to suppress this message [Sat Oct 30 00:54:28.255959 2021] [mpm_event:notice] [pid 1:tid 140042956995432] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations [Sat Oct 30 00:54:28.256003 2021] [core:notice] [pid 1:tid 140042956995432] AH00094: Command line: 'httpd -D FOREGROUND' Oct 30 00:59:29.787: INFO: Deleting all statefulset in ns statefulset-8799 Oct 30 00:59:29.789: INFO: Scaling statefulset ss to 0 Oct 30 00:59:29.797: INFO: Waiting for statefulset status.replicas updated to 0 Oct 30 00:59:29.800: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "statefulset-8799". STEP: Found 7 events. Oct 30 00:59:29.810: INFO: At 2021-10-30 00:54:25 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: []] Oct 30 00:59:29.810: INFO: At 2021-10-30 00:54:25 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. 
Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104]] Oct 30 00:59:29.810: INFO: At 2021-10-30 00:54:25 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100]] Oct 30 00:59:29.810: INFO: At 2021-10-30 00:54:27 +0000 UTC - event for test-pod: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" Oct 30 00:59:29.810: INFO: At 2021-10-30 00:54:27 +0000 UTC - event for test-pod: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 302.418678ms Oct 30 00:59:29.810: INFO: At 2021-10-30 00:54:28 +0000 UTC - event for test-pod: {kubelet node2} Created: Created container webserver Oct 30 00:59:29.810: INFO: At 2021-10-30 00:54:28 +0000 UTC - event for test-pod: {kubelet node2} Started: Started container webserver Oct 30 00:59:29.813: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 00:59:29.813: INFO: test-pod node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:54:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:54:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:54:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:54:25 +0000 UTC }] Oct 30 00:59:29.813: INFO: Oct 30 00:59:29.817: INFO: Logging node info for node master1 Oct 30 00:59:29.820: INFO: Node Info: &Node{ObjectMeta:{master1 b47c04d5-47a7-4a95-8e97-481e6e60af54 71361 0 2021-10-29 21:05:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-29 21:05:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-29 21:13:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:27 +0000 UTC,LastTransitionTime:2021-10-29 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 00:59:27 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 00:59:27 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 00:59:27 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 00:59:27 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d3ed60c561e427db72df14bd9006ed0,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:01b9d6bc-4126-4864-a1df-901a1bee4906,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ 
:],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 00:59:29.820: INFO: Logging kubelet events for node master1 Oct 30 00:59:29.822: INFO: Logging pods the kubelet thinks is on node master1 Oct 30 00:59:29.849: INFO: kube-scheduler-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:29.849: INFO: Container kube-scheduler ready: true, restart count 0 Oct 30 00:59:29.849: INFO: kube-proxy-z5k8p started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:29.849: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 00:59:29.849: INFO: coredns-8474476ff8-lczbr started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:29.849: INFO: Container coredns ready: true, restart count 1 Oct 30 00:59:29.849: INFO: container-registry-65d7c44b96-zzkfl started at 2021-10-29 21:12:56 +0000 UTC (0+2 container statuses recorded) Oct 30 00:59:29.849: INFO: Container docker-registry ready: true, restart count 0 Oct 30 00:59:29.849: INFO: Container nginx ready: true, restart count 0 Oct 30 00:59:29.849: INFO: node-exporter-fv84w started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 00:59:29.849: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 00:59:29.849: INFO: Container node-exporter ready: true, restart count 0 Oct 30 00:59:29.849: INFO: kube-apiserver-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:29.849: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 00:59:29.849: INFO: kube-controller-manager-master1 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:29.849: INFO: Container kube-controller-manager ready: true, restart count 2 Oct 30 00:59:29.849: INFO: kube-flannel-d4pmt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 00:59:29.849: INFO: Init container install-cni ready: true, restart count 0 Oct 30 00:59:29.849: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 00:59:29.849: INFO: kube-multus-ds-amd64-wgkfq started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:29.849: INFO: Container kube-multus ready: true, restart count 1 W1030 00:59:29.864503 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
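Note on the failure diagnosed above: the three FailedCreate events show that ss-0 was never admitted at all — every PodSecurityPolicy in the cluster limits hostPort to [9100] or [9103-9104] (or forbids it entirely) while the StatefulSet requests host port 21017 — so the assertion "Pod ss-0 expected to be re-created at least once" could never pass, and the node-info dumps that follow are the framework's standard failure diagnostics. A minimal Go sketch of the PSP field involved, with the range values taken from the events above:

package main

import (
	"fmt"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
)

func main() {
	// One of the allowed ranges reported in the FailedCreate events; a pod
	// asking for hostPort 21017 falls outside every such range, so PSP
	// admission rejects it before the pod object is ever created.
	spec := policyv1beta1.PodSecurityPolicySpec{
		HostPorts: []policyv1beta1.HostPortRange{{Min: 9103, Max: 9104}},
	}
	requested := int32(21017)
	allowed := false
	for _, r := range spec.HostPorts {
		if requested >= r.Min && requested <= r.Max {
			allowed = true
		}
	}
	fmt.Printf("hostPort %d allowed: %v\n", requested, allowed) // false
}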
Oct 30 00:59:29.934: INFO: Latency metrics for node master1 Oct 30 00:59:29.934: INFO: Logging node info for node master2 Oct 30 00:59:29.937: INFO: Node Info: &Node{ObjectMeta:{master2 208792d3-d365-4ddb-83d4-10e6e818079c 71311 0 2021-10-29 21:06:06 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-29 21:18:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:19 +0000 
UTC,LastTransitionTime:2021-10-29 21:11:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 00:59:24 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 00:59:24 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 00:59:24 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 00:59:24 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:12290c1916d84ddda20431c28083da6a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:314e82b8-9747-4131-b883-220496309995,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 00:59:29.937: INFO: Logging kubelet events for node master2 Oct 30 00:59:29.939: INFO: Logging pods the kubelet thinks is on node master2 Oct 30 00:59:29.954: INFO: kube-apiserver-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:29.954: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 00:59:29.954: INFO: kube-controller-manager-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:29.954: INFO: Container kube-controller-manager ready: true, restart count 3 Oct 30 00:59:29.954: INFO: kube-scheduler-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:29.954: INFO: Container kube-scheduler ready: true, restart count 2 Oct 30 00:59:29.954: INFO: kube-proxy-5gz4v started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:29.954: INFO: Container kube-proxy ready: true, restart count 2 Oct 30 00:59:29.954: INFO: kube-flannel-qvqll started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 00:59:29.954: INFO: Init container install-cni ready: true, restart count 2 Oct 30 00:59:29.954: INFO: Container kube-flannel ready: true, restart count 1 Oct 30 00:59:29.954: INFO: kube-multus-ds-amd64-brkpk started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:29.954: INFO: Container kube-multus ready: true, restart count 1 Oct 30 00:59:29.954: INFO: node-exporter-lc9kk started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 00:59:29.954: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 00:59:29.954: INFO: Container node-exporter ready: true, restart count 0 W1030 00:59:29.970578 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Oct 30 00:59:30.029: INFO: Latency metrics for node master2 Oct 30 00:59:30.029: INFO: Logging node info for node master3 Oct 30 00:59:30.031: INFO: Node Info: &Node{ObjectMeta:{master3 168f1589-e029-47ae-b194-10215fc22d6a 71303 0 2021-10-29 21:06:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-29 21:16:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-29 21:16:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:36 +0000 UTC,LastTransitionTime:2021-10-29 21:11:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 00:59:22 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 00:59:22 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 00:59:22 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 00:59:22 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de18dcb6cb4c493e9f4d987da2c8b3fd,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:89235c4b-b1f5-4716-bbd7-18b41c0bde74,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 00:59:30.032: INFO: Logging kubelet events for node master3 Oct 30 00:59:30.035: INFO: Logging pods the kubelet thinks are on node master3 Oct 30 00:59:30.051: INFO: prometheus-operator-585ccfb458-czbr2 started at 2021-10-29 21:21:06 +0000 UTC (0+2 container statuses recorded) Oct 30 00:59:30.051: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 00:59:30.051: INFO: Container prometheus-operator ready: true, restart count 0 Oct 30 00:59:30.051: INFO: node-exporter-bv946 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 00:59:30.051: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 00:59:30.051: INFO: Container node-exporter ready: true, restart count 0 Oct 30 00:59:30.051: INFO: kube-controller-manager-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.051: INFO: Container kube-controller-manager ready: true, restart count 1 Oct 30 00:59:30.051: INFO: kube-proxy-r6fpx started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.051: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 00:59:30.051: INFO: kube-flannel-rbdlt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 00:59:30.051: INFO: Init container install-cni ready: true, restart count 2 Oct 30 00:59:30.051: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 00:59:30.051: INFO: kube-multus-ds-amd64-bdwh9 started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.051: INFO: Container kube-multus ready: true, restart count 1 Oct 30 00:59:30.051: INFO: coredns-8474476ff8-wrwwv started at 2021-10-29 21:09:00 +0000 UTC (0+1 container statuses recorded) Oct 30
00:59:30.051: INFO: Container coredns ready: true, restart count 1 Oct 30 00:59:30.051: INFO: kube-apiserver-master3 started at 2021-10-29 21:11:10 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.051: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 00:59:30.051: INFO: kube-scheduler-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.051: INFO: Container kube-scheduler ready: true, restart count 2 Oct 30 00:59:30.051: INFO: dns-autoscaler-7df78bfcfb-phsdx started at 2021-10-29 21:09:02 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.051: INFO: Container autoscaler ready: true, restart count 1 Oct 30 00:59:30.051: INFO: node-feature-discovery-controller-cff799f9f-qq7g4 started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.051: INFO: Container nfd-controller ready: true, restart count 0 W1030 00:59:30.065761 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 00:59:30.153: INFO: Latency metrics for node master3 Oct 30 00:59:30.153: INFO: Logging node info for node node1 Oct 30 00:59:30.156: INFO: Node Info: &Node{ObjectMeta:{node1 ddef9269-94c5-4165-81fb-a3b0c4ac5c75 71352 0 2021-10-29 21:07:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 
feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:38 +0000 UTC,LastTransitionTime:2021-10-29 21:11:38 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 00:59:26 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 00:59:26 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 00:59:26 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 00:59:26 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bf4179125e4495c89c046ed0ae7baf7,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ce868148-dc5e-4c7c-a555-42ee929547f7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432289,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 00:59:30.157: INFO: Logging kubelet events for node node1 Oct 30 00:59:30.160: INFO: Logging pods the kubelet thinks are on node node1 Oct 30 00:59:30.176: INFO: cmk-89lqq started at 2021-10-29 21:20:10 +0000 UTC (0+2 container statuses recorded) Oct 30 00:59:30.176: INFO: Container nodereport ready: true, restart count 0 Oct 30 00:59:30.176: INFO: Container reconcile ready: true, restart count 0 Oct 30 00:59:30.176: INFO: node-exporter-256wm started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 00:59:30.176: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 00:59:30.176: INFO: Container node-exporter ready: true, restart count 0 Oct 30 00:59:30.176: INFO: prometheus-k8s-0 started at 2021-10-29 21:21:17 +0000 UTC (0+4
container statuses recorded) Oct 30 00:59:30.176: INFO: Container config-reloader ready: true, restart count 0 Oct 30 00:59:30.176: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 00:59:30.176: INFO: Container grafana ready: true, restart count 0 Oct 30 00:59:30.176: INFO: Container prometheus ready: true, restart count 1 Oct 30 00:59:30.176: INFO: pod2 started at 2021-10-30 00:59:29 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.176: INFO: Container agnhost-container ready: false, restart count 0 Oct 30 00:59:30.176: INFO: nodeport-test-c5pl8 started at 2021-10-30 00:58:07 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.176: INFO: Container nodeport-test ready: true, restart count 0 Oct 30 00:59:30.176: INFO: nginx-proxy-node1 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.176: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 00:59:30.176: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.176: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 00:59:30.176: INFO: cmk-init-discover-node1-n4mcc started at 2021-10-29 21:19:28 +0000 UTC (0+3 container statuses recorded) Oct 30 00:59:30.176: INFO: Container discover ready: false, restart count 0 Oct 30 00:59:30.176: INFO: Container init ready: false, restart count 0 Oct 30 00:59:30.177: INFO: Container install ready: false, restart count 0 Oct 30 00:59:30.177: INFO: simpletest.rc-xm6n4 started at 2021-10-30 00:58:10 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.177: INFO: Container nginx ready: true, restart count 0 Oct 30 00:59:30.177: INFO: simpletest.rc-j8rr5 started at 2021-10-30 00:58:10 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.177: INFO: Container nginx ready: true, restart count 0 Oct 30 00:59:30.177: INFO: simpletest.rc-4q47j started at 2021-10-30 00:58:10 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.177: INFO: Container nginx ready: true, restart count 0 Oct 30 00:59:30.177: INFO: simpletest.rc-8rdg4 started at 2021-10-30 00:58:10 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.177: INFO: Container nginx ready: true, restart count 0 Oct 30 00:59:30.177: INFO: simpletest.rc-4qfnt started at 2021-10-30 00:58:10 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.177: INFO: Container nginx ready: true, restart count 0 Oct 30 00:59:30.177: INFO: kube-multus-ds-amd64-68wrz started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.177: INFO: Container kube-multus ready: true, restart count 1 Oct 30 00:59:30.177: INFO: node-feature-discovery-worker-w5vdb started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.177: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 00:59:30.177: INFO: busybox-353bf135-1cbf-4baf-bd9e-bfc80e976260 started at 2021-10-30 00:59:08 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.177: INFO: Container busybox ready: true, restart count 0 Oct 30 00:59:30.177: INFO: kube-flannel-phg88 started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 00:59:30.177: INFO: Init container install-cni ready: true, restart count 2 Oct 30 00:59:30.177: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 00:59:30.177: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r started at 2021-10-29 21:17:10 +0000 UTC 
(0+1 container statuses recorded) Oct 30 00:59:30.177: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 00:59:30.177: INFO: simpletest.rc-l2gl6 started at 2021-10-30 00:58:10 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.177: INFO: Container nginx ready: true, restart count 0 Oct 30 00:59:30.177: INFO: kube-proxy-z5hqt started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.177: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 00:59:30.177: INFO: collectd-d45rv started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 00:59:30.177: INFO: Container collectd ready: true, restart count 0 Oct 30 00:59:30.177: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 00:59:30.177: INFO: Container rbac-proxy ready: true, restart count 0 W1030 00:59:30.192817 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 00:59:30.467: INFO: Latency metrics for node node1 Oct 30 00:59:30.467: INFO: Logging node info for node node2 Oct 30 00:59:30.471: INFO: Node Info: &Node{ObjectMeta:{node2 3b49ad19-ba56-4f4a-b1fa-eef102063de9 71289 0 2021-10-29 21:07:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 
kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:34 +0000 UTC,LastTransitionTime:2021-10-29 21:11:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 00:59:22 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 00:59:22 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 00:59:22 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 00:59:22 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7283436dd9e34722a6e4df817add95ed,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c219e7bd-582b-4d6c-b379-1161acc70676,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 00:59:30.471: INFO: Logging kubelet events for node node2 Oct 30 00:59:30.474: INFO: Logging pods the kubelet thinks are on node node2 Oct 30 00:59:30.494: INFO: externalname-service-7plvb started at 2021-10-30 00:58:27 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.494: INFO: Container externalname-service ready: true, restart count 0 Oct 30 00:59:30.494: INFO: kube-proxy-76285 started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.494: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 00:59:30.494: INFO: simpletest.rc-8j9kz started at 2021-10-30 00:58:10 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.494: INFO: Container nginx ready: true, restart count 0 Oct 30 00:59:30.494: INFO: simpletest.rc-85cjw started at 2021-10-30 00:58:10 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.494: INFO: Container nginx ready: true, restart count 0 Oct 30 00:59:30.494: INFO: execpodptmfk started at 2021-10-30 00:58:33 +0000
UTC (0+1 container statuses recorded) Oct 30 00:59:30.494: INFO: Container agnhost-container ready: true, restart count 0 Oct 30 00:59:30.494: INFO: pod-secrets-d98414fa-6698-4fd4-ad68-902ae4f1233d started at 2021-10-30 00:59:03 +0000 UTC (0+3 container statuses recorded) Oct 30 00:59:30.494: INFO: Container creates-volume-test ready: true, restart count 0 Oct 30 00:59:30.494: INFO: Container dels-volume-test ready: true, restart count 0 Oct 30 00:59:30.494: INFO: Container upds-volume-test ready: true, restart count 0 Oct 30 00:59:30.494: INFO: test-pod started at 2021-10-30 00:54:25 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.494: INFO: Container webserver ready: true, restart count 0 Oct 30 00:59:30.494: INFO: node-feature-discovery-worker-h6lcp started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.494: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 00:59:30.494: INFO: pod1 started at 2021-10-30 00:59:25 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.494: INFO: Container agnhost-container ready: true, restart count 0 Oct 30 00:59:30.494: INFO: simpletest.rc-ltdd4 started at 2021-10-30 00:58:10 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.494: INFO: Container nginx ready: true, restart count 0 Oct 30 00:59:30.494: INFO: kube-flannel-f6s5v started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 00:59:30.494: INFO: Init container install-cni ready: true, restart count 2 Oct 30 00:59:30.494: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 00:59:30.494: INFO: kube-multus-ds-amd64-7tvbl started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.494: INFO: Container kube-multus ready: true, restart count 1 Oct 30 00:59:30.494: INFO: cmk-webhook-6c9d5f8578-ffk66 started at 2021-10-29 21:20:11 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.494: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 00:59:30.494: INFO: nodeport-test-m797r started at 2021-10-30 00:58:07 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.494: INFO: Container nodeport-test ready: true, restart count 0 Oct 30 00:59:30.494: INFO: execpodmfc69 started at 2021-10-30 00:58:13 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.494: INFO: Container agnhost-container ready: true, restart count 0 Oct 30 00:59:30.494: INFO: nginx-proxy-node2 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.494: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 00:59:30.494: INFO: kubernetes-dashboard-785dcbb76d-pbjjt started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.494: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 00:59:30.494: INFO: cmk-init-discover-node2-2fmmt started at 2021-10-29 21:19:48 +0000 UTC (0+3 container statuses recorded) Oct 30 00:59:30.494: INFO: Container discover ready: false, restart count 0 Oct 30 00:59:30.494: INFO: Container init ready: false, restart count 0 Oct 30 00:59:30.494: INFO: Container install ready: false, restart count 0 Oct 30 00:59:30.494: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.494: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 00:59:30.494: INFO: simpletest.rc-kz4ph started at 2021-10-30 00:58:10 +0000 UTC (0+1 container statuses recorded) Oct 30 
00:59:30.494: INFO: Container nginx ready: true, restart count 0 Oct 30 00:59:30.494: INFO: replace-27259259-7v89c started at 2021-10-30 00:59:00 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.494: INFO: Container c ready: true, restart count 0 Oct 30 00:59:30.494: INFO: cmk-8bpbf started at 2021-10-29 21:20:11 +0000 UTC (0+2 container statuses recorded) Oct 30 00:59:30.494: INFO: Container nodereport ready: true, restart count 0 Oct 30 00:59:30.494: INFO: Container reconcile ready: true, restart count 0 Oct 30 00:59:30.494: INFO: node-exporter-r77s4 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 00:59:30.494: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 00:59:30.494: INFO: Container node-exporter ready: true, restart count 0 Oct 30 00:59:30.494: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh started at 2021-10-29 21:24:23 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.494: INFO: Container tas-extender ready: true, restart count 0 Oct 30 00:59:30.494: INFO: collectd-flvhl started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 00:59:30.494: INFO: Container collectd ready: true, restart count 0 Oct 30 00:59:30.494: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 00:59:30.494: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 00:59:30.495: INFO: externalname-service-jjz59 started at 2021-10-30 00:58:27 +0000 UTC (0+1 container statuses recorded) Oct 30 00:59:30.495: INFO: Container externalname-service ready: true, restart count 0 W1030 00:59:30.507184 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 00:59:30.753: INFO: Latency metrics for node node2 Oct 30 00:59:30.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8799" for this suite. 
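------------------------------
The failure summary that follows reports "Pod ss-0 expected to be re-created at least once": after evicting ss-0, the spec polls for the StatefulSet controller to bring the pod back under a new UID, and here no recreation was observed before the timeout (the 305s runtime is consistent with a roughly five-minute wait). Below is a minimal client-go sketch of such a recreate check, not the suite's actual code; the poll interval, timeout, and pre-eviction UID placeholder are assumptions, while the namespace matches the teardown above.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRecreate polls until a pod with the given name reappears under a
// different UID, i.e. the StatefulSet controller has replaced the evicted pod.
func waitForRecreate(cs kubernetes.Interface, ns, name string, oldUID types.UID) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // likely still deleted; keep polling
		}
		return pod.UID != oldUID, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// "statefulset-8799" matches the namespace torn down above; the old UID
	// would be captured from a Get issued before the eviction (placeholder here).
	if err := waitForRecreate(cs, "statefulset-8799", "ss-0", types.UID("uid-before-eviction")); err != nil {
		fmt.Println("ss-0 was not re-created:", err)
	}
}

------------------------------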
• Failure [305.376 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Should recreate evicted statefulset [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 00:59:29.438: Pod ss-0 expected to be re-created at least once /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 ------------------------------ {"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":6,"skipped":143,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:59:24.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service endpoint-test2 in namespace services-941 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-941 to expose endpoints map[] Oct 30 00:59:24.700: INFO: Failed to get Endpoints object: endpoints "endpoint-test2" not found Oct 30 00:59:25.706: INFO: successfully validated that service endpoint-test2 in namespace services-941 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-941 Oct 30 00:59:25.720: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:59:27.724: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:59:29.724: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-941 to expose endpoints map[pod1:[80]] Oct 30 00:59:29.734: INFO: successfully validated that service endpoint-test2 in namespace services-941 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-941 Oct 30 00:59:29.746: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:59:31.750: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:59:33.749: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-941 to expose endpoints map[pod1:[80] pod2:[80]] Oct 30 00:59:33.762: INFO: successfully validated that service endpoint-test2 in namespace services-941 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-941 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-941
to expose endpoints map[pod2:[80]] Oct 30 00:59:33.778: INFO: successfully validated that service endpoint-test2 in namespace services-941 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-941 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-941 to expose endpoints map[] Oct 30 00:59:33.790: INFO: successfully validated that service endpoint-test2 in namespace services-941 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:59:33.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-941" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:9.142 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":25,"skipped":343,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:59:33.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test service account token: Oct 30 00:59:33.863: INFO: Waiting up to 5m0s for pod "test-pod-37ab4869-51ec-4934-b85d-617f8c584ff4" in namespace "svcaccounts-9786" to be "Succeeded or Failed" Oct 30 00:59:33.865: INFO: Pod "test-pod-37ab4869-51ec-4934-b85d-617f8c584ff4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263171ms Oct 30 00:59:35.867: INFO: Pod "test-pod-37ab4869-51ec-4934-b85d-617f8c584ff4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004811994s Oct 30 00:59:37.871: INFO: Pod "test-pod-37ab4869-51ec-4934-b85d-617f8c584ff4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008601751s STEP: Saw pod success Oct 30 00:59:37.871: INFO: Pod "test-pod-37ab4869-51ec-4934-b85d-617f8c584ff4" satisfied condition "Succeeded or Failed" Oct 30 00:59:37.875: INFO: Trying to get logs from node node1 pod test-pod-37ab4869-51ec-4934-b85d-617f8c584ff4 container agnhost-container: STEP: delete the pod Oct 30 00:59:37.890: INFO: Waiting for pod test-pod-37ab4869-51ec-4934-b85d-617f8c584ff4 to disappear Oct 30 00:59:37.892: INFO: Pod test-pod-37ab4869-51ec-4934-b85d-617f8c584ff4 no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:59:37.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9786" for this suite. 
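The projected-token spec above reduces to a single pod: a serviceAccountToken source inside a projected volume, mounted read-only, whose file the container prints before exiting. A minimal sketch with the k8s.io/api types; the image, mount path, and one-hour expiry are illustrative assumptions, not values from the test source:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildTokenPod sketches the kind of pod the spec above creates: a
// projected serviceAccountToken volume whose contents the container
// reads once and then exits, so the pod can reach Succeeded.
func buildTokenPod(ns string) *corev1.Pod {
	expiry := int64(3600) // assumption: one-hour token, not taken from the log
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod-token", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "agnhost-container",
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative image
				Command: []string{"sh", "-c", "cat /var/run/secrets/tokens/sa-token"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "sa-token",
					MountPath: "/var/run/secrets/tokens",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "sa-token",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
								Path:              "sa-token",
								ExpirationSeconds: &expiry,
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = buildTokenPod("svcaccounts-9786") }

Unlike the legacy secret-based token mount, the kubelet refreshes a projected token file as it approaches expiry, which is the behavior this conformance test exists to cover.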
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":26,"skipped":355,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:58:10.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W1030 00:58:50.555826 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 00:59:52.573: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering. Oct 30 00:59:52.573: INFO: Deleting pod "simpletest.rc-4q47j" in namespace "gc-3254" Oct 30 00:59:52.581: INFO: Deleting pod "simpletest.rc-4qfnt" in namespace "gc-3254" Oct 30 00:59:52.587: INFO: Deleting pod "simpletest.rc-85cjw" in namespace "gc-3254" Oct 30 00:59:52.593: INFO: Deleting pod "simpletest.rc-8j9kz" in namespace "gc-3254" Oct 30 00:59:52.600: INFO: Deleting pod "simpletest.rc-8rdg4" in namespace "gc-3254" Oct 30 00:59:52.605: INFO: Deleting pod "simpletest.rc-j8rr5" in namespace "gc-3254" Oct 30 00:59:52.613: INFO: Deleting pod "simpletest.rc-kz4ph" in namespace "gc-3254" Oct 30 00:59:52.618: INFO: Deleting pod "simpletest.rc-l2gl6" in namespace "gc-3254" Oct 30 00:59:52.623: INFO: Deleting pod "simpletest.rc-ltdd4" in namespace "gc-3254" Oct 30 00:59:52.630: INFO: Deleting pod "simpletest.rc-xm6n4" in namespace "gc-3254" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 00:59:52.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3254" for this suite.
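What makes the garbage-collector spec above pass is the delete option itself: PropagationPolicy=Orphan removes the ReplicationController but deliberately leaves its pods behind, which is why the log then deletes each simpletest.rc-* pod by hand. A sketch of that delete call; the RC name is inferred from the pod names in the log, not confirmed by the test source:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Orphan propagation: the RC object goes away, ownerReferences on its
	// pods are cleared, and the GC must NOT collect the pods afterwards.
	orphan := metav1.DeletePropagationOrphan
	err = cs.CoreV1().ReplicationControllers("gc-3254").Delete(
		context.TODO(), "simpletest.rc", // name assumed from the pod names above
		metav1.DeleteOptions{PropagationPolicy: &orphan},
	)
	if err != nil {
		panic(err)
	}
}

The contrast case is DeletePropagationBackground (or Foreground), where the garbage collector would delete the pods as dependents.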
• [SLOW TEST:102.167 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":33,"skipped":531,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:58:47.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W1030 00:58:47.874856 31 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ReplaceConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring the job is replaced with a new one STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:00:01.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-1028" for this suite. 
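The Replace policy exercised above is a one-field decision on the CronJob spec: when the next schedule fires while a job is still running, the controller deletes the running job and starts a fresh one. A sketch of such an object using batch/v1beta1, matching the deprecated API the warning above flags; the schedule, image, and sleep duration are illustrative:

package main

import (
	batchv1 "k8s.io/api/batch/v1"
	batchv1beta1 "k8s.io/api/batch/v1beta1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildReplaceCronJob sketches the cronjob shape under test. The job's
// container sleeps longer than the schedule interval on purpose, so the
// next tick always finds a still-running job to replace.
func buildReplaceCronJob(ns string) *batchv1beta1.CronJob {
	return &batchv1beta1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "replace", Namespace: ns},
		Spec: batchv1beta1.CronJobSpec{
			Schedule:          "*/1 * * * *",
			ConcurrencyPolicy: batchv1beta1.ReplaceConcurrent,
			JobTemplate: batchv1beta1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyNever,
							Containers: []corev1.Container{{
								Name:    "c",
								Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29", // illustrative
								Command: []string{"sleep", "300"},                  // outlives the schedule
							}},
						},
					},
				},
			},
		},
	}
}

func main() { _ = buildReplaceCronJob("cronjob-1028") }

From v1.21 on, batch/v1 is the non-deprecated home for CronJob; the same spec fields carry over.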
• [SLOW TEST:74.051 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":19,"skipped":414,"failed":0} SSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":14,"skipped":328,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:59:07.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-353bf135-1cbf-4baf-bd9e-bfc80e976260 in namespace container-probe-1400 Oct 30 00:59:18.003: INFO: Started pod busybox-353bf135-1cbf-4baf-bd9e-bfc80e976260 in namespace container-probe-1400 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 00:59:18.007: INFO: Initial restart count of pod busybox-353bf135-1cbf-4baf-bd9e-bfc80e976260 is 0 Oct 30 01:00:10.107: INFO: Restart count of pod container-probe-1400/busybox-353bf135-1cbf-4baf-bd9e-bfc80e976260 is now 1 (52.100274962s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:00:10.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1400" for this suite.
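The restart counted above is manufactured deliberately: the container writes its own health file, removes it partway through its run, and the exec probe's cat then fails, so the kubelet kills and restarts the container. A sketch of such a pod; timings and image are illustrative, and the embedded Handler field name matches the v1.21-era k8s.io/api (it became ProbeHandler in later releases):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildLivenessPod sketches the probed pod: /tmp/health exists for the
// first stretch of the container's life, then disappears, driving
// exactly the restartCount 0 -> 1 transition the log records.
func buildLivenessPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness", Namespace: ns},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "k8s.gcr.io/e2e-test-images/busybox:1.29", // illustrative
				Command: []string{"/bin/sh", "-c",
					"touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15, // illustrative timings
					FailureThreshold:    1,
				},
			}},
		},
	}
}

func main() { _ = buildLivenessPod("container-probe-1400") }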
• [SLOW TEST:62.159 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with an exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":328,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:59:52.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3445 STEP: Creating an active service to test reachability when its FQDN is used as the externalName of another service STEP: creating service externalsvc in namespace services-3445 STEP: creating replication controller externalsvc in namespace services-3445 I1030 00:59:52.733491 38 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3445, replica count: 2 I1030 00:59:55.785874 38 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 00:59:58.786312 38 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Oct 30 00:59:58.798: INFO: Creating new exec pod Oct 30 01:00:02.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3445 exec execpodtzdl4 -- /bin/sh -x -c nslookup clusterip-service.services-3445.svc.cluster.local' Oct 30 01:00:03.057: INFO: stderr: "+ nslookup clusterip-service.services-3445.svc.cluster.local\n" Oct 30 01:00:03.057: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nclusterip-service.services-3445.svc.cluster.local\tcanonical name = externalsvc.services-3445.svc.cluster.local.\nName:\texternalsvc.services-3445.svc.cluster.local\nAddress: 10.233.60.140\n\n" STEP: deleting ReplicationController externalsvc in namespace services-3445, will wait for the garbage collector to delete the pods Oct 30 01:00:03.115: INFO: Deleting ReplicationController externalsvc took: 3.545588ms Oct 30 01:00:03.215: INFO: Terminating ReplicationController externalsvc pods took: 100.556311ms Oct 30 01:00:13.225: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:00:13.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3445" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:20.553 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":34,"skipped":552,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:59:03.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-48bbad59-67f9-48f5-b415-4d2b50cb07e0 STEP: Creating secret with name s-test-opt-upd-98d051f3-bf9b-4bee-aa71-22039ca40277 STEP: Creating the pod Oct 30 00:59:03.114: INFO: The status of Pod pod-secrets-d98414fa-6698-4fd4-ad68-902ae4f1233d is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:59:05.118: INFO: The status of Pod pod-secrets-d98414fa-6698-4fd4-ad68-902ae4f1233d is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:59:07.118: INFO: The status of Pod pod-secrets-d98414fa-6698-4fd4-ad68-902ae4f1233d is Pending, waiting for it to be Running (with Ready = true) Oct 30 00:59:09.118: INFO: The status of Pod pod-secrets-d98414fa-6698-4fd4-ad68-902ae4f1233d is Running (Ready = true) STEP: Deleting secret s-test-opt-del-48bbad59-67f9-48f5-b415-4d2b50cb07e0 STEP: Updating secret s-test-opt-upd-98d051f3-bf9b-4bee-aa71-22039ca40277 STEP: Creating secret with name s-test-opt-create-85228a95-3afc-4f30-81d9-368896b5f5bc STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:00:16.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-733" for this suite. 
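The three phases logged above (delete one secret, update another, create a third) all work against a single long-running pod because its secret volumes are marked Optional, and the kubelet keeps re-projecting the mounted files as the secrets change. A sketch of the volume wiring for one such secret; the agnhost mounttest arguments are an assumption about how one might watch the file, not taken from the test source:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildOptionalSecretPod sketches the setup: Optional means the pod can
// start (and keep running) even when the secret is absent, so deleting
// the secret later just empties the mount instead of breaking the pod.
func buildOptionalSecretPod(ns, secretName string) *corev1.Pod {
	optional := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-optional", Namespace: ns},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "c",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative
				// Assumed invocation: agnhost's mounttest subcommand polling the file.
				Args: []string{"mounttest", "--file_content_in_loop=/etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: secretName,
						Optional:   &optional,
					},
				},
			}},
		},
	}
}

func main() { _ = buildOptionalSecretPod("secrets-733", "s-test-opt-del") }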
• [SLOW TEST:73.432 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":443,"failed":0} [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:16.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Oct 30 01:00:16.535: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2780 6305ebeb-86de-4487-8813-c3f34aaea34e 71940 0 2021-10-30 01:00:16 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-30 01:00:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 01:00:16.535: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2780 6305ebeb-86de-4487-8813-c3f34aaea34e 71941 0 2021-10-30 01:00:16 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-30 01:00:16 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Oct 30 01:00:16.549: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2780 6305ebeb-86de-4487-8813-c3f34aaea34e 71942 0 2021-10-30 01:00:16 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-30 01:00:16 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 01:00:16.549: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2780 6305ebeb-86de-4487-8813-c3f34aaea34e 71943 0 2021-10-30 01:00:16 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-30 01:00:16 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} 
[AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:00:16.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2780" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":31,"skipped":443,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:10.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Oct 30 01:00:10.163: INFO: Waiting up to 5m0s for pod "pod-b8c163ba-033d-4aeb-bc23-e152f1afb2d6" in namespace "emptydir-8614" to be "Succeeded or Failed" Oct 30 01:00:10.165: INFO: Pod "pod-b8c163ba-033d-4aeb-bc23-e152f1afb2d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048553ms Oct 30 01:00:12.167: INFO: Pod "pod-b8c163ba-033d-4aeb-bc23-e152f1afb2d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004698438s Oct 30 01:00:14.171: INFO: Pod "pod-b8c163ba-033d-4aeb-bc23-e152f1afb2d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008211695s Oct 30 01:00:16.176: INFO: Pod "pod-b8c163ba-033d-4aeb-bc23-e152f1afb2d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013555373s Oct 30 01:00:18.180: INFO: Pod "pod-b8c163ba-033d-4aeb-bc23-e152f1afb2d6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017469099s Oct 30 01:00:20.186: INFO: Pod "pod-b8c163ba-033d-4aeb-bc23-e152f1afb2d6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023179849s Oct 30 01:00:22.191: INFO: Pod "pod-b8c163ba-033d-4aeb-bc23-e152f1afb2d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.028304977s STEP: Saw pod success Oct 30 01:00:22.191: INFO: Pod "pod-b8c163ba-033d-4aeb-bc23-e152f1afb2d6" satisfied condition "Succeeded or Failed" Oct 30 01:00:22.194: INFO: Trying to get logs from node node1 pod pod-b8c163ba-033d-4aeb-bc23-e152f1afb2d6 container test-container: STEP: delete the pod Oct 30 01:00:22.206: INFO: Waiting for pod pod-b8c163ba-033d-4aeb-bc23-e152f1afb2d6 to disappear Oct 30 01:00:22.209: INFO: Pod pod-b8c163ba-033d-4aeb-bc23-e152f1afb2d6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:00:22.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8614" for this suite. 
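The (non-root,0644,default) triple in the spec name above maps directly onto one pod: run as a non-root UID, expect 0644 file permissions, and use the default emptyDir medium (node disk rather than tmpfs). A sketch; the UID, image, and shell check are illustrative stand-ins for the framework's mounttest helper:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildEmptyDirPod sketches the test's setup: a default-medium emptyDir
// mounted into a container running as a non-root UID, which writes a
// file with 0644 permissions and verifies both content and mode.
func buildEmptyDirPod(ns string) *corev1.Pod {
	nonRoot := int64(1001) // illustrative non-root UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0644", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "k8s.gcr.io/e2e-test-images/busybox:1.29", // illustrative
				Command: []string{"sh", "-c",
					"echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Empty EmptyDirVolumeSource selects the default medium (node disk).
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
		},
	}
}

func main() { _ = buildEmptyDirPod("emptydir-8614") }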
• [SLOW TEST:12.087 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":330,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:13.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 01:00:13.280: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78125bf5-2997-4749-a1f9-6c772228fd22" in namespace "downward-api-3339" to be "Succeeded or Failed" Oct 30 01:00:13.282: INFO: Pod "downwardapi-volume-78125bf5-2997-4749-a1f9-6c772228fd22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.554935ms Oct 30 01:00:15.285: INFO: Pod "downwardapi-volume-78125bf5-2997-4749-a1f9-6c772228fd22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005524667s Oct 30 01:00:17.288: INFO: Pod "downwardapi-volume-78125bf5-2997-4749-a1f9-6c772228fd22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008695749s Oct 30 01:00:19.292: INFO: Pod "downwardapi-volume-78125bf5-2997-4749-a1f9-6c772228fd22": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012638769s Oct 30 01:00:21.297: INFO: Pod "downwardapi-volume-78125bf5-2997-4749-a1f9-6c772228fd22": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017393867s Oct 30 01:00:23.300: INFO: Pod "downwardapi-volume-78125bf5-2997-4749-a1f9-6c772228fd22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.020627557s STEP: Saw pod success Oct 30 01:00:23.300: INFO: Pod "downwardapi-volume-78125bf5-2997-4749-a1f9-6c772228fd22" satisfied condition "Succeeded or Failed" Oct 30 01:00:23.302: INFO: Trying to get logs from node node1 pod downwardapi-volume-78125bf5-2997-4749-a1f9-6c772228fd22 container client-container: STEP: delete the pod Oct 30 01:00:23.319: INFO: Waiting for pod downwardapi-volume-78125bf5-2997-4749-a1f9-6c772228fd22 to disappear Oct 30 01:00:23.321: INFO: Pod downwardapi-volume-78125bf5-2997-4749-a1f9-6c772228fd22 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:00:23.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3339" for this suite. 
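The downward-API spec above hinges on a resourceFieldRef: a downwardAPI volume file whose content is the container's own memory request, which the container then reads back from the mount. A sketch with an assumed 32Mi request and file name, neither taken from the test source:

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildDownwardPod sketches the plugin under test: the kubelet writes
// requests.memory for client-container into /etc/podinfo/memory_request,
// and the container simply cats it and exits.
func buildDownwardPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29", // illustrative
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"), // assumed request
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = buildDownwardPod("downward-api-3339") }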
• [SLOW TEST:10.080 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":555,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:58:07.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service nodeport-test with type=NodePort in namespace services-6248 STEP: creating replication controller nodeport-test in namespace services-6248 I1030 00:58:07.223621 29 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-6248, replica count: 2 I1030 00:58:10.274914 29 runners.go:190] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 00:58:13.276835 29 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 30 00:58:13.276: INFO: Creating new exec pod Oct 30 00:58:22.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Oct 30 00:58:22.910: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Oct 30 00:58:22.910: INFO: stdout: "nodeport-test-m797r" Oct 30 00:58:22.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.16.8 80' Oct 30 00:58:23.638: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.16.8 80\nConnection to 10.233.16.8 80 port [tcp/http] succeeded!\n" Oct 30 00:58:23.638: INFO: stdout: "nodeport-test-c5pl8" Oct 30 00:58:23.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:58:23.852: INFO: rc: 1 Oct 30 00:58:23.852: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:58:24.853 - 00:59:15.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' (the same attempt repeated 50 times, roughly once per second; output identical apart from timestamps) Every attempt: rc: 1. Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying...
Oct 30 00:59:15.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:16.118: INFO: rc: 1 Oct 30 00:59:16.118: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:16.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:17.169: INFO: rc: 1 Oct 30 00:59:17.169: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:17.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:18.107: INFO: rc: 1 Oct 30 00:59:18.107: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:18.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:19.111: INFO: rc: 1 Oct 30 00:59:19.111: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:19.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:20.128: INFO: rc: 1 Oct 30 00:59:20.128: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:59:20.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:21.118: INFO: rc: 1 Oct 30 00:59:21.118: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:21.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:22.199: INFO: rc: 1 Oct 30 00:59:22.199: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:22.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:23.094: INFO: rc: 1 Oct 30 00:59:23.094: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:23.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:24.133: INFO: rc: 1 Oct 30 00:59:24.133: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:24.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:25.089: INFO: rc: 1 Oct 30 00:59:25.089: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo+ hostName nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:59:25.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:26.153: INFO: rc: 1 Oct 30 00:59:26.153: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:26.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:27.208: INFO: rc: 1 Oct 30 00:59:27.208: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:27.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:28.117: INFO: rc: 1 Oct 30 00:59:28.117: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:28.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:29.094: INFO: rc: 1 Oct 30 00:59:29.095: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:29.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:30.112: INFO: rc: 1 Oct 30 00:59:30.112: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:59:30.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:31.134: INFO: rc: 1 Oct 30 00:59:31.134: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:31.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:32.121: INFO: rc: 1 Oct 30 00:59:32.121: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:32.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:33.122: INFO: rc: 1 Oct 30 00:59:33.122: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:33.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:34.228: INFO: rc: 1 Oct 30 00:59:34.229: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:34.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:35.124: INFO: rc: 1 Oct 30 00:59:35.124: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:59:35.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:36.152: INFO: rc: 1 Oct 30 00:59:36.152: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:36.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:37.118: INFO: rc: 1 Oct 30 00:59:37.118: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:37.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:38.102: INFO: rc: 1 Oct 30 00:59:38.102: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:38.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:39.470: INFO: rc: 1 Oct 30 00:59:39.470: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:39.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:40.268: INFO: rc: 1 Oct 30 00:59:40.268: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:59:40.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:42.040: INFO: rc: 1 Oct 30 00:59:42.040: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:42.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:43.147: INFO: rc: 1 Oct 30 00:59:43.147: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:43.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:44.147: INFO: rc: 1 Oct 30 00:59:44.147: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:44.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:45.121: INFO: rc: 1 Oct 30 00:59:45.121: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:45.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:46.181: INFO: rc: 1 Oct 30 00:59:46.181: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:59:46.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:47.173: INFO: rc: 1 Oct 30 00:59:47.173: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:47.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:48.098: INFO: rc: 1 Oct 30 00:59:48.098: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:48.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:49.102: INFO: rc: 1 Oct 30 00:59:49.102: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:49.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:50.108: INFO: rc: 1 Oct 30 00:59:50.108: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:50.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:51.135: INFO: rc: 1 Oct 30 00:59:51.135: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:59:51.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:52.094: INFO: rc: 1 Oct 30 00:59:52.094: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:52.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:53.164: INFO: rc: 1 Oct 30 00:59:53.164: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:53.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:54.151: INFO: rc: 1 Oct 30 00:59:54.151: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:54.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:55.239: INFO: rc: 1 Oct 30 00:59:55.239: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:55.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:56.104: INFO: rc: 1 Oct 30 00:59:56.104: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 00:59:56.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:57.126: INFO: rc: 1 Oct 30 00:59:57.126: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:57.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:58.098: INFO: rc: 1 Oct 30 00:59:58.098: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:58.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 00:59:59.105: INFO: rc: 1 Oct 30 00:59:59.105: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 00:59:59.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 01:00:00.096: INFO: rc: 1 Oct 30 01:00:00.096: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:00.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 01:00:01.099: INFO: rc: 1 Oct 30 01:00:01.099: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:00:01.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 01:00:02.125: INFO: rc: 1 Oct 30 01:00:02.125: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:02.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 01:00:03.092: INFO: rc: 1 Oct 30 01:00:03.093: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:03.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 01:00:04.102: INFO: rc: 1 Oct 30 01:00:04.102: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:04.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 01:00:05.100: INFO: rc: 1 Oct 30 01:00:05.100: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:05.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 01:00:06.076: INFO: rc: 1 Oct 30 01:00:06.076: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:00:06.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 01:00:07.148: INFO: rc: 1 Oct 30 01:00:07.148: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:07.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 01:00:08.114: INFO: rc: 1 Oct 30 01:00:08.114: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:08.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 01:00:09.097: INFO: rc: 1 Oct 30 01:00:09.097: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:09.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 01:00:10.116: INFO: rc: 1 Oct 30 01:00:10.116: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:10.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 01:00:12.020: INFO: rc: 1 Oct 30 01:00:12.020: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:00:12.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 01:00:13.125: INFO: rc: 1 Oct 30 01:00:13.125: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:13.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 01:00:14.094: INFO: rc: 1 Oct 30 01:00:14.094: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:14.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 01:00:15.105: INFO: rc: 1 Oct 30 01:00:15.105: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:15.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 01:00:16.100: INFO: rc: 1 Oct 30 01:00:16.101: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:16.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 01:00:17.108: INFO: rc: 1 Oct 30 01:00:17.108: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:00:17.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 01:00:18.126: INFO: rc: 1 Oct 30 01:00:18.126: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:18.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 01:00:19.116: INFO: rc: 1 Oct 30 01:00:19.116: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:19.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 01:00:20.105: INFO: rc: 1 Oct 30 01:00:20.105: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:20.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 01:00:21.174: INFO: rc: 1 Oct 30 01:00:21.174: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:21.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413' Oct 30 01:00:22.261: INFO: rc: 1 Oct 30 01:00:22.261: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32413 nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
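What the loop above is doing: the test execs into the client pod (execpodmfc69) and runs `echo hostName | nc -v -t -w 2 10.10.190.207 32413`, i.e. a plain TCP connect to the node IP on the service's NodePort with a 2-second connect timeout, retried about once per second until an overall 2-minute deadline. A minimal, self-contained Go sketch of the same probe-until-deadline pattern (standard library only; the address and time budgets are taken from this log, everything else is illustrative, not the framework's actual code):

package main

import (
	"fmt"
	"net"
	"time"
)

// probeOnce mirrors `nc -v -t -w 2 <host> <port>`: one TCP connect
// attempt with a 2-second timeout.
func probeOnce(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return err // e.g. "connection refused", as seen above
	}
	return conn.Close()
}

func main() {
	addr := "10.10.190.207:32413"               // node IP + NodePort from the log
	deadline := time.Now().Add(2 * time.Minute) // the framework's 2m0s budget
	for time.Now().Before(deadline) {
		if err := probeOnce(addr); err != nil {
			fmt.Printf("probe failed: %v; retrying...\n", err)
			time.Sleep(time.Second) // ~1s between attempts, matching the log cadence
			continue
		}
		fmt.Println("endpoint reachable")
		return
	}
	fmt.Printf("service is not reachable within 2m0s timeout on endpoint %s\n", addr)
}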
Oct 30 01:00:24.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413'
Oct 30 01:00:24.328: INFO: rc: 1
Oct 30 01:00:24.328: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6248 exec execpodmfc69 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32413:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32413
nc: connect to 10.10.190.207 port 32413 (tcp) failed: Connection refused
command terminated with exit code 1

error: exit status 1
Retrying...
Oct 30 01:00:24.329: FAIL: Unexpected error:
    <*errors.errorString | 0xc0014e38b0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32413 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32413 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.11()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 +0x265
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001901980)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001901980)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001901980, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-6248".
STEP: Found 17 events.
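Note the shape of this failure: every probe over the full two minutes was refused outright, while the events and pod listing below show both nodeport-test backends and the client pod Running and Ready. That combination usually points at the service data path (the Service selecting no ready endpoints, or kube-proxy not programming the NodePort on 10.10.190.207) rather than at the pods themselves. As a hedged diagnostic sketch, not part of the e2e framework, one could check with client-go whether the Service actually has ready endpoint addresses; the namespace and kubeconfig path come from this log, and the Service name "nodeport-test" is inferred from the pod and event names:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the test harness uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	// "nodeport-test" is inferred from the events below; adjust if the
	// Service is named differently.
	ep, err := clientset.CoreV1().Endpoints("services-6248").Get(
		context.TODO(), "nodeport-test", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	ready := 0
	for _, subset := range ep.Subsets {
		ready += len(subset.Addresses) // addresses that passed readiness
	}
	// Zero here would explain "connection refused": with no ready
	// endpoints, kube-proxy rejects connections to the NodePort.
	fmt.Printf("ready endpoint addresses for nodeport-test: %d\n", ready)
}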
Oct 30 01:00:24.345: INFO: At 2021-10-30 00:58:07 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-m797r
Oct 30 01:00:24.345: INFO: At 2021-10-30 00:58:07 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-c5pl8
Oct 30 01:00:24.345: INFO: At 2021-10-30 00:58:07 +0000 UTC - event for nodeport-test-c5pl8: {default-scheduler } Scheduled: Successfully assigned services-6248/nodeport-test-c5pl8 to node1
Oct 30 01:00:24.345: INFO: At 2021-10-30 00:58:07 +0000 UTC - event for nodeport-test-m797r: {default-scheduler } Scheduled: Successfully assigned services-6248/nodeport-test-m797r to node2
Oct 30 01:00:24.345: INFO: At 2021-10-30 00:58:08 +0000 UTC - event for nodeport-test-c5pl8: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:00:24.345: INFO: At 2021-10-30 00:58:09 +0000 UTC - event for nodeport-test-c5pl8: {kubelet node1} Started: Started container nodeport-test
Oct 30 01:00:24.345: INFO: At 2021-10-30 00:58:09 +0000 UTC - event for nodeport-test-c5pl8: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 301.291978ms
Oct 30 01:00:24.345: INFO: At 2021-10-30 00:58:09 +0000 UTC - event for nodeport-test-c5pl8: {kubelet node1} Created: Created container nodeport-test
Oct 30 01:00:24.345: INFO: At 2021-10-30 00:58:09 +0000 UTC - event for nodeport-test-m797r: {kubelet node2} Started: Started container nodeport-test
Oct 30 01:00:24.345: INFO: At 2021-10-30 00:58:09 +0000 UTC - event for nodeport-test-m797r: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:00:24.345: INFO: At 2021-10-30 00:58:09 +0000 UTC - event for nodeport-test-m797r: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 307.130034ms
Oct 30 01:00:24.345: INFO: At 2021-10-30 00:58:09 +0000 UTC - event for nodeport-test-m797r: {kubelet node2} Created: Created container nodeport-test
Oct 30 01:00:24.345: INFO: At 2021-10-30 00:58:13 +0000 UTC - event for execpodmfc69: {default-scheduler } Scheduled: Successfully assigned services-6248/execpodmfc69 to node2
Oct 30 01:00:24.345: INFO: At 2021-10-30 00:58:18 +0000 UTC - event for execpodmfc69: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:00:24.345: INFO: At 2021-10-30 00:58:19 +0000 UTC - event for execpodmfc69: {kubelet node2} Created: Created container agnhost-container
Oct 30 01:00:24.345: INFO: At 2021-10-30 00:58:19 +0000 UTC - event for execpodmfc69: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 330.285441ms
Oct 30 01:00:24.345: INFO: At 2021-10-30 00:58:20 +0000 UTC - event for execpodmfc69: {kubelet node2} Started: Started container agnhost-container
Oct 30 01:00:24.347: INFO: POD                  NODE   PHASE    GRACE  CONDITIONS
Oct 30 01:00:24.347: INFO: execpodmfc69         node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:13 +0000 UTC }]
Oct 30 01:00:24.348: INFO: nodeport-test-c5pl8  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:07 +0000 UTC }]
Oct 30 01:00:24.348: INFO: nodeport-test-m797r  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:07 +0000 UTC }]
Oct 30 01:00:24.348: INFO:
Oct 30 01:00:24.353: INFO: Logging node info for node master1
Oct 30 01:00:24.356: INFO: Node Info: &Node{ObjectMeta:{master1 b47c04d5-47a7-4a95-8e97-481e6e60af54 71955 0 2021-10-29 21:05:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-29 21:05:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-29 21:13:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory:
{{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:27 +0000 UTC,LastTransitionTime:2021-10-29 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:18 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:18 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:18 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:00:18 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d3ed60c561e427db72df14bd9006ed0,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:01b9d6bc-4126-4864-a1df-901a1bee4906,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:00:24.356: INFO: Logging kubelet events for node master1 Oct 30 01:00:24.359: INFO: Logging pods the kubelet thinks is on node master1 Oct 30 01:00:24.368: INFO: kube-scheduler-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:24.368: INFO: Container kube-scheduler ready: true, restart count 0 Oct 30 01:00:24.368: INFO: kube-proxy-z5k8p started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:24.368: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:00:24.368: INFO: coredns-8474476ff8-lczbr started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:24.368: INFO: Container coredns ready: true, restart count 1 Oct 30 01:00:24.368: INFO: container-registry-65d7c44b96-zzkfl started at 2021-10-29 21:12:56 +0000 UTC (0+2 container statuses recorded) Oct 30 01:00:24.368: INFO: Container docker-registry ready: true, restart count 0 Oct 30 01:00:24.368: INFO: Container nginx ready: true, restart count 0 Oct 30 01:00:24.368: INFO: node-exporter-fv84w started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses 
recorded) Oct 30 01:00:24.368: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:00:24.368: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:00:24.368: INFO: kube-apiserver-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:24.368: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 01:00:24.368: INFO: kube-controller-manager-master1 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:24.368: INFO: Container kube-controller-manager ready: true, restart count 2 Oct 30 01:00:24.368: INFO: kube-flannel-d4pmt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:00:24.368: INFO: Init container install-cni ready: true, restart count 0 Oct 30 01:00:24.368: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:00:24.368: INFO: kube-multus-ds-amd64-wgkfq started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:24.368: INFO: Container kube-multus ready: true, restart count 1 W1030 01:00:24.382205 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:00:24.450: INFO: Latency metrics for node master1 Oct 30 01:00:24.450: INFO: Logging node info for node master2 Oct 30 01:00:24.453: INFO: Node Info: &Node{ObjectMeta:{master2 208792d3-d365-4ddb-83d4-10e6e818079c 71915 0 2021-10-29 21:06:06 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-29 21:18:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:19 +0000 UTC,LastTransitionTime:2021-10-29 21:11:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:14 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:14 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:14 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:00:14 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:12290c1916d84ddda20431c28083da6a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:314e82b8-9747-4131-b883-220496309995,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:00:24.453: INFO: Logging kubelet events for node master2 Oct 30 01:00:24.455: INFO: Logging pods the kubelet thinks is on node master2 Oct 30 01:00:24.463: INFO: node-exporter-lc9kk started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:00:24.463: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:00:24.463: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:00:24.463: INFO: kube-apiserver-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:24.464: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 01:00:24.464: INFO: kube-controller-manager-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 
01:00:24.464: INFO: Container kube-controller-manager ready: true, restart count 3 Oct 30 01:00:24.464: INFO: kube-scheduler-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:24.464: INFO: Container kube-scheduler ready: true, restart count 2 Oct 30 01:00:24.464: INFO: kube-proxy-5gz4v started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:24.464: INFO: Container kube-proxy ready: true, restart count 2 Oct 30 01:00:24.464: INFO: kube-flannel-qvqll started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:00:24.464: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:00:24.464: INFO: Container kube-flannel ready: true, restart count 1 Oct 30 01:00:24.464: INFO: kube-multus-ds-amd64-brkpk started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:24.464: INFO: Container kube-multus ready: true, restart count 1 W1030 01:00:24.478101 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:00:24.540: INFO: Latency metrics for node master2 Oct 30 01:00:24.540: INFO: Logging node info for node master3 Oct 30 01:00:24.543: INFO: Node Info: &Node{ObjectMeta:{master3 168f1589-e029-47ae-b194-10215fc22d6a 72057 0 2021-10-29 21:06:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-29 21:16:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-29 21:16:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:36 +0000 UTC,LastTransitionTime:2021-10-29 21:11:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:23 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:23 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:23 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:00:23 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de18dcb6cb4c493e9f4d987da2c8b3fd,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:89235c4b-b1f5-4716-bbd7-18b41c0bde74,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:00:24.544: INFO: Logging kubelet events for node master3 Oct 30 01:00:24.546: INFO: Logging pods the kubelet thinks is on node master3 Oct 30 01:00:24.556: INFO: prometheus-operator-585ccfb458-czbr2 started at 2021-10-29 21:21:06 +0000 UTC (0+2 container statuses recorded) Oct 30 01:00:24.556: INFO: Container kube-rbac-proxy ready: true, restart 
count 0
Oct 30 01:00:24.556: INFO: Container prometheus-operator ready: true, restart count 0
Oct 30 01:00:24.556: INFO: node-exporter-bv946 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 01:00:24.556: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 30 01:00:24.556: INFO: Container node-exporter ready: true, restart count 0
Oct 30 01:00:24.556: INFO: kube-controller-manager-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:24.556: INFO: Container kube-controller-manager ready: true, restart count 1
Oct 30 01:00:24.556: INFO: kube-proxy-r6fpx started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:24.556: INFO: Container kube-proxy ready: true, restart count 1
Oct 30 01:00:24.556: INFO: kube-flannel-rbdlt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 01:00:24.556: INFO: Init container install-cni ready: true, restart count 2
Oct 30 01:00:24.556: INFO: Container kube-flannel ready: true, restart count 2
Oct 30 01:00:24.556: INFO: kube-multus-ds-amd64-bdwh9 started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:24.556: INFO: Container kube-multus ready: true, restart count 1
Oct 30 01:00:24.556: INFO: coredns-8474476ff8-wrwwv started at 2021-10-29 21:09:00 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:24.556: INFO: Container coredns ready: true, restart count 1
Oct 30 01:00:24.556: INFO: kube-apiserver-master3 started at 2021-10-29 21:11:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:24.556: INFO: Container kube-apiserver ready: true, restart count 0
Oct 30 01:00:24.556: INFO: kube-scheduler-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:24.556: INFO: Container kube-scheduler ready: true, restart count 2
Oct 30 01:00:24.556: INFO: dns-autoscaler-7df78bfcfb-phsdx started at 2021-10-29 21:09:02 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:24.556: INFO: Container autoscaler ready: true, restart count 1
Oct 30 01:00:24.556: INFO: node-feature-discovery-controller-cff799f9f-qq7g4 started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:24.556: INFO: Container nfd-controller ready: true, restart count 0
W1030 01:00:24.568569 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
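------------------------------
The per-node "Logging pods the kubelet thinks is on node ..." blocks in this section can be approximated straight from the API server with a field selector on spec.nodeName. A minimal client-go sketch (assumptions: the kubeconfig path /root/.kube/config is taken from this run, k8s.io/client-go is available as a Go module, and node1 is only an example node; this is not the framework's own dump code):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The API server serves spec.nodeName as a field selector for pods.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=node1"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

The equivalent one-liner is: kubectl get pods -A --field-selector spec.nodeName=node1
------------------------------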
Oct 30 01:00:24.645: INFO: Latency metrics for node master3 Oct 30 01:00:24.645: INFO: Logging node info for node node1 Oct 30 01:00:24.648: INFO: Node Info: &Node{ObjectMeta:{node1 ddef9269-94c5-4165-81fb-a3b0c4ac5c75 71954 0 2021-10-29 21:07:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:38 +0000 UTC,LastTransitionTime:2021-10-29 21:11:38 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:18 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:18 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:18 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:00:18 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bf4179125e4495c89c046ed0ae7baf7,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ce868148-dc5e-4c7c-a555-42ee929547f7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432289,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:00:24.648: INFO: Logging kubelet events for node node1 Oct 30 01:00:24.650: INFO: Logging pods the kubelet thinks is on node node1 Oct 30 01:00:24.664: INFO: kube-proxy-z5hqt started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:24.664: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:00:24.664: INFO: collectd-d45rv started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 01:00:24.664: INFO: Container collectd ready: true, restart count 0 Oct 30 01:00:24.664: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:00:24.664: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:00:24.664: INFO: sample-webhook-deployment-78988fc6cd-5tj2h started at 2021-10-30 
01:00:22 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:24.664: INFO: Container sample-webhook ready: false, restart count 0
Oct 30 01:00:24.664: INFO: cmk-89lqq started at 2021-10-29 21:20:10 +0000 UTC (0+2 container statuses recorded)
Oct 30 01:00:24.664: INFO: Container nodereport ready: true, restart count 0
Oct 30 01:00:24.664: INFO: Container reconcile ready: true, restart count 0
Oct 30 01:00:24.664: INFO: node-exporter-256wm started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 01:00:24.664: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 30 01:00:24.664: INFO: Container node-exporter ready: true, restart count 0
Oct 30 01:00:24.664: INFO: prometheus-k8s-0 started at 2021-10-29 21:21:17 +0000 UTC (0+4 container statuses recorded)
Oct 30 01:00:24.664: INFO: Container config-reloader ready: true, restart count 0
Oct 30 01:00:24.664: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Oct 30 01:00:24.664: INFO: Container grafana ready: true, restart count 0
Oct 30 01:00:24.664: INFO: Container prometheus ready: true, restart count 1
Oct 30 01:00:24.664: INFO: nodeport-test-c5pl8 started at 2021-10-30 00:58:07 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:24.664: INFO: Container nodeport-test ready: true, restart count 0
Oct 30 01:00:24.664: INFO: nginx-proxy-node1 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:24.664: INFO: Container nginx-proxy ready: true, restart count 2
Oct 30 01:00:24.664: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:24.664: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Oct 30 01:00:24.664: INFO: cmk-init-discover-node1-n4mcc started at 2021-10-29 21:19:28 +0000 UTC (0+3 container statuses recorded)
Oct 30 01:00:24.664: INFO: Container discover ready: false, restart count 0
Oct 30 01:00:24.664: INFO: Container init ready: false, restart count 0
Oct 30 01:00:24.664: INFO: Container install ready: false, restart count 0
Oct 30 01:00:24.664: INFO: replace-27259260-6sjmk started at 2021-10-30 01:00:00 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:24.664: INFO: Container c ready: true, restart count 0
Oct 30 01:00:24.664: INFO: kube-multus-ds-amd64-68wrz started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:24.664: INFO: Container kube-multus ready: true, restart count 1
Oct 30 01:00:24.665: INFO: node-feature-discovery-worker-w5vdb started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:24.665: INFO: Container nfd-worker ready: true, restart count 0
Oct 30 01:00:24.665: INFO: kube-flannel-phg88 started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 01:00:24.665: INFO: Init container install-cni ready: true, restart count 2
Oct 30 01:00:24.665: INFO: Container kube-flannel ready: true, restart count 2
Oct 30 01:00:24.665: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:24.665: INFO: Container kube-sriovdp ready: true, restart count 0
Oct 30 01:00:24.665: INFO: test-recreate-deployment-6cb8b65c46-d467t started at 2021-10-30 01:00:23 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:24.665: INFO: Container agnhost ready: false, restart count 0
W1030 01:00:24.678002 29 metrics_grabber.go:105] Did not receive an external client
interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:00:25.831: INFO: Latency metrics for node node1 Oct 30 01:00:25.831: INFO: Logging node info for node node2 Oct 30 01:00:25.834: INFO: Node Info: &Node{ObjectMeta:{node2 3b49ad19-ba56-4f4a-b1fa-eef102063de9 72037 0 2021-10-29 21:07:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:34 +0000 UTC,LastTransitionTime:2021-10-29 21:11:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:22 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:22 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:22 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:00:22 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7283436dd9e34722a6e4df817add95ed,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c219e7bd-582b-4d6c-b379-1161acc70676,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:00:25.834: INFO: Logging kubelet events for node node2 Oct 30 01:00:25.837: INFO: Logging pods the kubelet thinks is on node node2 Oct 30 01:00:25.855: INFO: replace-27259259-7v89c started at 2021-10-30 00:59:00 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:25.855: INFO: Container c ready: true, restart count 0 Oct 30 01:00:25.855: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:25.855: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:00:25.855: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh started at 2021-10-29 21:24:23 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:25.855: INFO: Container tas-extender ready: true, restart count 0 Oct 30 01:00:25.855: INFO: simpletest.deployment-577cc9f676-hsfsd started at 2021-10-30 00:59:37 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:25.855: INFO: Container nginx ready: true, restart count 0 Oct 30 
01:00:25.855: INFO: cmk-8bpbf started at 2021-10-29 21:20:11 +0000 UTC (0+2 container statuses recorded) Oct 30 01:00:25.855: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:00:25.855: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:00:25.855: INFO: node-exporter-r77s4 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:00:25.855: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:00:25.855: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:00:25.855: INFO: collectd-flvhl started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 01:00:25.855: INFO: Container collectd ready: true, restart count 0 Oct 30 01:00:25.855: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:00:25.855: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:00:25.855: INFO: externalname-service-jjz59 started at 2021-10-30 00:58:27 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:25.855: INFO: Container externalname-service ready: true, restart count 0 Oct 30 01:00:25.855: INFO: externalname-service-7plvb started at 2021-10-30 00:58:27 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:25.855: INFO: Container externalname-service ready: true, restart count 0 Oct 30 01:00:25.855: INFO: kube-proxy-76285 started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:25.855: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:00:25.855: INFO: execpodptmfk started at 2021-10-30 00:58:33 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:25.855: INFO: Container agnhost-container ready: true, restart count 0 Oct 30 01:00:25.855: INFO: pod-secrets-d98414fa-6698-4fd4-ad68-902ae4f1233d started at 2021-10-30 00:59:03 +0000 UTC (0+3 container statuses recorded) Oct 30 01:00:25.855: INFO: Container creates-volume-test ready: false, restart count 0 Oct 30 01:00:25.855: INFO: Container dels-volume-test ready: false, restart count 0 Oct 30 01:00:25.855: INFO: Container upds-volume-test ready: false, restart count 0 Oct 30 01:00:25.855: INFO: node-feature-discovery-worker-h6lcp started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:25.855: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:00:25.855: INFO: simpletest.deployment-577cc9f676-nrdfl started at 2021-10-30 00:59:37 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:25.856: INFO: Container nginx ready: true, restart count 0 Oct 30 01:00:25.856: INFO: cmk-webhook-6c9d5f8578-ffk66 started at 2021-10-29 21:20:11 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:25.856: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 01:00:25.856: INFO: kube-flannel-f6s5v started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:00:25.856: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:00:25.856: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 01:00:25.856: INFO: kube-multus-ds-amd64-7tvbl started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:25.856: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:00:25.856: INFO: cmk-init-discover-node2-2fmmt started at 2021-10-29 21:19:48 +0000 UTC (0+3 container statuses recorded) Oct 30 01:00:25.856: INFO: Container discover ready: false, restart count 0 Oct 30 01:00:25.856: INFO: Container init ready: false, restart count 0 Oct 30 
01:00:25.856: INFO: Container install ready: false, restart count 0 Oct 30 01:00:25.856: INFO: nodeport-test-m797r started at 2021-10-30 00:58:07 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:25.856: INFO: Container nodeport-test ready: true, restart count 0 Oct 30 01:00:25.856: INFO: execpodmfc69 started at 2021-10-30 00:58:13 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:25.856: INFO: Container agnhost-container ready: true, restart count 0 Oct 30 01:00:25.856: INFO: nginx-proxy-node2 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:25.856: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:00:25.856: INFO: kubernetes-dashboard-785dcbb76d-pbjjt started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:25.856: INFO: Container kubernetes-dashboard ready: true, restart count 1 W1030 01:00:25.870045 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:00:26.113: INFO: Latency metrics for node node2 Oct 30 01:00:26.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6248" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [138.935 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to create a functioning NodePort service [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:00:24.329: Unexpected error: <*errors.errorString | 0xc0014e38b0>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32413 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32413 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":15,"skipped":326,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:23.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:00:23.359: INFO: Creating deployment "test-recreate-deployment" Oct 30 01:00:23.362: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Oct 30 01:00:23.368: INFO: deployment "test-recreate-deployment" doesn't have the 
required revision set Oct 30 01:00:25.373: INFO: Waiting deployment "test-recreate-deployment" to complete Oct 30 01:00:25.376: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152423, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152423, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152423, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152423, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:00:27.379: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Oct 30 01:00:27.386: INFO: Updating deployment test-recreate-deployment Oct 30 01:00:27.386: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 30 01:00:27.427: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-6823 e2a6954b-365e-496f-b9b3-8fc0bdb305e2 72169 2 2021-10-30 01:00:23 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-30 01:00:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-30 01:00:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00499ba68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-10-30 01:00:27 +0000 UTC,LastTransitionTime:2021-10-30 01:00:27 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2021-10-30 01:00:27 +0000 UTC,LastTransitionTime:2021-10-30 01:00:23 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Oct 30 01:00:27.430: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-6823 5c07884a-20cd-4598-a00b-c9c98f6ab510 72167 1 2021-10-30 01:00:27 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment e2a6954b-365e-496f-b9b3-8fc0bdb305e2 0xc00499bee0 0xc00499bee1}] [] [{kube-controller-manager Update apps/v1 2021-10-30 01:00:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e2a6954b-365e-496f-b9b3-8fc0bdb305e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00499bf58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 30 01:00:27.430: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Oct 30 01:00:27.430: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6cb8b65c46 deployment-6823 403b37e3-f54c-43e5-a6cb-c3595ddab131 72157 2 2021-10-30 01:00:23 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment e2a6954b-365e-496f-b9b3-8fc0bdb305e2 0xc00499bde7 0xc00499bde8}] [] [{kube-controller-manager Update apps/v1 2021-10-30 01:00:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e2a6954b-365e-496f-b9b3-8fc0bdb305e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6cb8b65c46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00499be78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 30 01:00:27.433: INFO: Pod 
"test-recreate-deployment-85d47dcb4-mp5mc" is not available: &Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-mp5mc test-recreate-deployment-85d47dcb4- deployment-6823 dde74060-bb69-4715-bebd-a0a4a790be2d 72170 0 2021-10-30 01:00:27 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 5c07884a-20cd-4598-a00b-c9c98f6ab510 0xc000b7a38f 0xc000b7a3a0}] [] [{kube-controller-manager Update v1 2021-10-30 01:00:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5c07884a-20cd-4598-a00b-c9c98f6ab510\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-30 01:00:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wsgz7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wsgz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabi
lities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:00:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:00:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:00:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:00:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2021-10-30 01:00:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:00:27.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6823" for this suite. 
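As context for the Recreate behavior verified above, here is a minimal client-go sketch of creating such a deployment. It is hypothetical, not the e2e framework's code: the namespace "default" is illustrative, while the deployment name, labels, image, and kubeconfig path are taken from the log. With a Recreate strategy the controller scales the old ReplicaSet to zero before the new ReplicaSet creates any pods, which is why the dump above shows test-recreate-deployment-6cb8b65c46 at Replicas:0 while the new pod is still Pending.

    package main

    import (
        "context"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path as printed at the top of the run.
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        replicas := int32(1)
        d := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas: &replicas,
                Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod-3"}},
                // Recreate: terminate all old pods before creating any new ones.
                Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod-3"}},
                    Spec: corev1.PodSpec{Containers: []corev1.Container{{
                        Name:  "httpd",
                        Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
                    }}},
                },
            },
        }
        created, err := client.AppsV1().Deployments("default").Create(context.TODO(), d, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("created deployment:", created.Name)
    }

Updating the pod template of this deployment (as the test does when it swaps agnhost for httpd) then triggers the old-pods-first teardown recorded in the ReplicaSet dumps above.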
• ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":36,"skipped":559,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:22.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 01:00:22.894: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 01:00:24.904: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152422, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152422, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152422, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152422, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 01:00:27.913: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:00:27.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9534" for this suite. 
STEP: Destroying namespace "webhook-9534-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.703 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":17,"skipped":345,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:59:30.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Oct 30 00:59:30.861: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3208 fbbce118-6c05-4523-8194-dc2b8217d439 71394 0 2021-10-30 00:59:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-30 00:59:30 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 00:59:30.862: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3208 fbbce118-6c05-4523-8194-dc2b8217d439 71394 0 2021-10-30 00:59:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-30 00:59:30 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Oct 30 00:59:40.872: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3208 fbbce118-6c05-4523-8194-dc2b8217d439 71544 0 2021-10-30 00:59:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-30 00:59:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 00:59:40.872: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3208 fbbce118-6c05-4523-8194-dc2b8217d439 71544 0 2021-10-30 00:59:30 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-30 00:59:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Oct 30 00:59:50.881: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3208 fbbce118-6c05-4523-8194-dc2b8217d439 71583 0 2021-10-30 00:59:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-30 00:59:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 00:59:50.882: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3208 fbbce118-6c05-4523-8194-dc2b8217d439 71583 0 2021-10-30 00:59:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-30 00:59:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Oct 30 01:00:00.887: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3208 fbbce118-6c05-4523-8194-dc2b8217d439 71789 0 2021-10-30 00:59:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-30 00:59:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 01:00:00.887: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3208 fbbce118-6c05-4523-8194-dc2b8217d439 71789 0 2021-10-30 00:59:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-30 00:59:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Oct 30 01:00:10.895: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3208 51ec35d5-893c-4784-b64b-a4b9e46f807a 71878 0 2021-10-30 01:00:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-30 01:00:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 01:00:10.895: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3208 51ec35d5-893c-4784-b64b-a4b9e46f807a 71878 0 2021-10-30 01:00:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-30 01:00:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Oct 30 01:00:20.901: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b 
watch-3208 51ec35d5-893c-4784-b64b-a4b9e46f807a 71988 0 2021-10-30 01:00:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-30 01:00:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 01:00:20.901: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3208 51ec35d5-893c-4784-b64b-a4b9e46f807a 71988 0 2021-10-30 01:00:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-30 01:00:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:00:30.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3208" for this suite. • [SLOW TEST:60.082 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":7,"skipped":176,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:27.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:00:27.523: INFO: The status of Pod busybox-host-aliases16071bc4-3fee-4274-8670-8535c8429e86 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:00:29.528: INFO: The status of Pod busybox-host-aliases16071bc4-3fee-4274-8670-8535c8429e86 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:00:31.527: INFO: The status of Pod busybox-host-aliases16071bc4-3fee-4274-8670-8535c8429e86 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:00:33.527: INFO: The status of Pod busybox-host-aliases16071bc4-3fee-4274-8670-8535c8429e86 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:00:33.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1217" for this suite. 
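As context for the hostAliases spec above, a minimal sketch of the kind of pod it schedules. This is hypothetical, not the framework's code: the pod name, IP, hostnames, and image here are illustrative. The kubelet renders each HostAlias entry into the container's /etc/hosts, which is what the test then reads back and asserts on.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                // The kubelet appends each entry below to the container's /etc/hosts,
                // e.g. "123.45.67.89 foo.local bar.local".
                HostAliases: []corev1.HostAlias{
                    {IP: "123.45.67.89", Hostnames: []string{"foo.local", "bar.local"}},
                },
                Containers: []corev1.Container{{
                    Name:    "busybox",
                    Image:   "busybox:1.28",
                    Command: []string{"cat", "/etc/hosts"},
                }},
            },
        }
        if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("created pod:", pod.Name)
    }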
• [SLOW TEST:6.052 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox Pod with hostAliases /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:137 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":586,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:30.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-ac500f29-d582-48b4-a0fb-920eacd23a61 STEP: Creating a pod to test consume secrets Oct 30 01:00:30.951: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b2333ac1-cd55-4760-8bd1-ce64e2b68812" in namespace "projected-8125" to be "Succeeded or Failed" Oct 30 01:00:30.953: INFO: Pod "pod-projected-secrets-b2333ac1-cd55-4760-8bd1-ce64e2b68812": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131576ms Oct 30 01:00:32.958: INFO: Pod "pod-projected-secrets-b2333ac1-cd55-4760-8bd1-ce64e2b68812": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007246303s Oct 30 01:00:34.962: INFO: Pod "pod-projected-secrets-b2333ac1-cd55-4760-8bd1-ce64e2b68812": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011173021s Oct 30 01:00:36.967: INFO: Pod "pod-projected-secrets-b2333ac1-cd55-4760-8bd1-ce64e2b68812": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015672389s STEP: Saw pod success Oct 30 01:00:36.967: INFO: Pod "pod-projected-secrets-b2333ac1-cd55-4760-8bd1-ce64e2b68812" satisfied condition "Succeeded or Failed" Oct 30 01:00:36.970: INFO: Trying to get logs from node node1 pod pod-projected-secrets-b2333ac1-cd55-4760-8bd1-ce64e2b68812 container projected-secret-volume-test: STEP: delete the pod Oct 30 01:00:36.985: INFO: Waiting for pod pod-projected-secrets-b2333ac1-cd55-4760-8bd1-ce64e2b68812 to disappear Oct 30 01:00:36.986: INFO: Pod pod-projected-secrets-b2333ac1-cd55-4760-8bd1-ce64e2b68812 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:00:36.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8125" for this suite. 
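As context for the projected-secret spec above, a minimal sketch of a pod consuming a secret through a projected volume with a key-to-path mapping. This is hypothetical, not the framework's code: the secret name, key, path, and image are illustrative (the actual test generates randomized names like the projected-secret-test-map-... seen in the log). The point the test verifies is that the key surfaces in the volume under the mapped path, not under its original name.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "projected-secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                Secret: &corev1.SecretProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
                                    // The mapping: key "data-1" appears in the volume as "new-path-data-1".
                                    Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "projected-secret-volume-test",
                    Image:   "busybox:1.28",
                    Command: []string{"cat", "/etc/projected-secret-volume/new-path-data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "projected-secret-volume",
                        MountPath: "/etc/projected-secret-volume",
                        ReadOnly:  true,
                    }},
                }},
            },
        }
        if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("created pod:", pod.Name)
    }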
• [SLOW TEST:6.076 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":179,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:59:37.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W1030 00:59:39.007572 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:00:41.024: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:00:41.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6750" for this suite. 
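As context for the garbage-collector spec above, a minimal sketch of deleting a deployment with the Orphan propagation policy, which detaches the owned ReplicaSet instead of cascading the delete. This is hypothetical, not the framework's code: the namespace and deployment name here are illustrative. The spec's 60-second wait in the log is it confirming that the garbage collector does not mistakenly delete the orphaned ReplicaSet.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Orphan: delete the Deployment but leave its ReplicaSet (and pods) behind.
        policy := metav1.DeletePropagationOrphan
        err = client.AppsV1().Deployments("default").Delete(
            context.TODO(), "simpletest.deployment", metav1.DeleteOptions{PropagationPolicy: &policy})
        if err != nil {
            panic(err)
        }
        // The ReplicaSet should survive the delete; the GC must not cascade.
        rss, err := client.AppsV1().ReplicaSets("default").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("replicasets remaining:", len(rss.Items))
    }

On a v1.21 client the kubectl equivalent should be "kubectl delete deployment <name> --cascade=orphan".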
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 00:58:27.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a service externalname-service with the type=ExternalName in namespace services-983
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-983
I1030 00:58:27.720535 28 runners.go:190] Created replication controller with name: externalname-service, namespace: services-983, replica count: 2
I1030 00:58:30.771782 28 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1030 00:58:33.772489 28 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 30 00:58:33.772: INFO: Creating new exec pod
Oct 30 00:58:38.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct 30 00:58:39.135: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Oct 30 00:58:39.135: INFO: stdout: "externalname-service-jjz59"
Oct 30 00:58:39.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.22.100 80'
Oct 30 00:58:39.436: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.22.100 80\nConnection to 10.233.22.100 80 port [tcp/http] succeeded!\n"
Oct 30 00:58:39.436: INFO: stdout: "externalname-service-jjz59"
Oct 30 00:58:39.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574'
Oct 30 00:58:39.867: INFO: rc: 1
Oct 30 00:58:39.867: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30574
nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[identical retry output condensed: the probe 'echo hostName | nc -v -t -w 2 10.10.190.207 30574' was re-run from the exec pod roughly once per second from Oct 30 00:58:40.868 through Oct 30 01:00:12.110; every attempt returned rc: 1 with the same stderr ("nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused") followed by "Retrying...". A by-hand sketch of the flow being exercised here follows below.]
Oct 30 01:00:12.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:13.124: INFO: rc: 1 Oct 30 01:00:13.124: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:13.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:14.117: INFO: rc: 1 Oct 30 01:00:14.117: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:14.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:15.109: INFO: rc: 1 Oct 30 01:00:15.110: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:15.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:16.109: INFO: rc: 1 Oct 30 01:00:16.109: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:16.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:17.131: INFO: rc: 1 Oct 30 01:00:17.131: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:00:17.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:18.125: INFO: rc: 1 Oct 30 01:00:18.125: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:18.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:19.121: INFO: rc: 1 Oct 30 01:00:19.122: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:19.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:20.108: INFO: rc: 1 Oct 30 01:00:20.108: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:20.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:21.173: INFO: rc: 1 Oct 30 01:00:21.174: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:21.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:22.260: INFO: rc: 1 Oct 30 01:00:22.260: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:00:22.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:23.094: INFO: rc: 1 Oct 30 01:00:23.095: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:23.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:24.098: INFO: rc: 1 Oct 30 01:00:24.098: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:24.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:25.124: INFO: rc: 1 Oct 30 01:00:25.124: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:25.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:26.108: INFO: rc: 1 Oct 30 01:00:26.108: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:26.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:27.106: INFO: rc: 1 Oct 30 01:00:27.106: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:00:27.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:28.119: INFO: rc: 1 Oct 30 01:00:28.119: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:28.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:29.101: INFO: rc: 1 Oct 30 01:00:29.101: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:29.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:30.121: INFO: rc: 1 Oct 30 01:00:30.121: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:30.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:31.114: INFO: rc: 1 Oct 30 01:00:31.114: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:31.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:32.132: INFO: rc: 1 Oct 30 01:00:32.132: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:00:32.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:33.105: INFO: rc: 1 Oct 30 01:00:33.105: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:33.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:34.135: INFO: rc: 1 Oct 30 01:00:34.135: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:34.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:35.325: INFO: rc: 1 Oct 30 01:00:35.325: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:35.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:36.097: INFO: rc: 1 Oct 30 01:00:36.097: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:36.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:37.127: INFO: rc: 1 Oct 30 01:00:37.127: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:00:37.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:38.093: INFO: rc: 1 Oct 30 01:00:38.093: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:38.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:39.095: INFO: rc: 1 Oct 30 01:00:39.095: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:39.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:40.103: INFO: rc: 1 Oct 30 01:00:40.103: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:00:40.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574' Oct 30 01:00:40.361: INFO: rc: 1 Oct 30 01:00:40.361: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-983 exec execpodptmfk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30574: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30574 nc: connect to 10.10.190.207 port 30574 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:00:40.362: FAIL: Unexpected error:
    <*errors.errorString | 0xc0056d8030>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30574 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30574 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.15()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 +0x358
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00139d980)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00139d980)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00139d980, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Oct 30 01:00:40.363: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-983".
STEP: Found 17 events.
Oct 30 01:00:40.376: INFO: At 2021-10-30 00:58:27 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-jjz59
Oct 30 01:00:40.376: INFO: At 2021-10-30 00:58:27 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-7plvb
Oct 30 01:00:40.376: INFO: At 2021-10-30 00:58:27 +0000 UTC - event for externalname-service-7plvb: {default-scheduler } Scheduled: Successfully assigned services-983/externalname-service-7plvb to node2
Oct 30 01:00:40.376: INFO: At 2021-10-30 00:58:27 +0000 UTC - event for externalname-service-jjz59: {default-scheduler } Scheduled: Successfully assigned services-983/externalname-service-jjz59 to node2
Oct 30 01:00:40.376: INFO: At 2021-10-30 00:58:29 +0000 UTC - event for externalname-service-7plvb: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 336.419979ms
Oct 30 01:00:40.376: INFO: At 2021-10-30 00:58:29 +0000 UTC - event for externalname-service-7plvb: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:00:40.376: INFO: At 2021-10-30 00:58:30 +0000 UTC - event for externalname-service-7plvb: {kubelet node2} Started: Started container externalname-service
Oct 30 01:00:40.376: INFO: At 2021-10-30 00:58:30 +0000 UTC - event for externalname-service-7plvb: {kubelet node2} Created: Created container externalname-service
Oct 30 01:00:40.376: INFO: At 2021-10-30 00:58:30 +0000 UTC - event for externalname-service-jjz59: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:00:40.376: INFO: At 2021-10-30 00:58:31 +0000 UTC - event for externalname-service-jjz59: {kubelet node2} Started: Started container externalname-service
Oct 30 01:00:40.376: INFO: At 2021-10-30 00:58:31 +0000 UTC - event for externalname-service-jjz59: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 304.694603ms
Oct 30 01:00:40.376: INFO: At 2021-10-30 00:58:31 +0000 UTC - event for externalname-service-jjz59: {kubelet node2} Created: Created container externalname-service
Oct 30 01:00:40.376: INFO: At 2021-10-30 00:58:33 +0000 UTC - event for execpodptmfk: {default-scheduler } Scheduled: Successfully assigned services-983/execpodptmfk to node2
Oct 30 01:00:40.376: INFO: At 2021-10-30 00:58:35 +0000 UTC - event for execpodptmfk: {kubelet node2} Created: Created container agnhost-container
Oct 30 01:00:40.376: INFO: At 2021-10-30 00:58:35 +0000 UTC - event for execpodptmfk: {kubelet node2} Started: Started container agnhost-container
Oct 30 01:00:40.376: INFO: At 2021-10-30 00:58:35 +0000 UTC - event for execpodptmfk: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:00:40.376: INFO: At 2021-10-30 00:58:35 +0000 UTC - event for execpodptmfk: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 302.310376ms
Oct 30 01:00:40.379: INFO: POD                         NODE   PHASE    GRACE  CONDITIONS
Oct 30 01:00:40.379: INFO: execpodptmfk                node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:33 +0000 UTC }]
Oct 30 01:00:40.379: INFO: externalname-service-7plvb  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:27 +0000 UTC }]
Oct 30 01:00:40.379: INFO: externalname-service-jjz59  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:27 +0000 UTC }]
Oct 30 01:00:40.379: INFO: 
Oct 30 01:00:40.384: INFO: Logging node info for node master1
Oct 30 01:00:40.386: INFO: Node Info: &Node{ObjectMeta:{master1 b47c04d5-47a7-4a95-8e97-481e6e60af54 72421 0 2021-10-29 21:05:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-29 21:05:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-29 21:13:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:27 +0000 UTC,LastTransitionTime:2021-10-29 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:38 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:38 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:38 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:00:38 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d3ed60c561e427db72df14bd9006ed0,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:01b9d6bc-4126-4864-a1df-901a1bee4906,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ 
:],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 01:00:40.387: INFO: Logging kubelet events for node master1
Oct 30 01:00:40.389: INFO: Logging pods the kubelet thinks is on node master1
Oct 30 01:00:40.409: INFO: coredns-8474476ff8-lczbr started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:40.409: INFO: Container coredns ready: true, restart count 1
Oct 30 01:00:40.409: INFO: container-registry-65d7c44b96-zzkfl started at 2021-10-29 21:12:56 +0000 UTC (0+2 container statuses recorded)
Oct 30 01:00:40.409: INFO: Container docker-registry ready: true, restart count 0
Oct 30 01:00:40.409: INFO: Container nginx ready: true, restart count 0
Oct 30 01:00:40.409: INFO: node-exporter-fv84w started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 01:00:40.409: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 30 01:00:40.410: INFO: Container node-exporter ready: true, restart count 0
Oct 30 01:00:40.410: INFO: kube-scheduler-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:40.410: INFO: Container kube-scheduler ready: true, restart count 0
Oct 30 01:00:40.410: INFO: kube-proxy-z5k8p started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:40.410: INFO: Container kube-proxy ready: true, restart count 1
Oct 30 01:00:40.410: INFO: kube-flannel-d4pmt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 01:00:40.410: INFO: Init container install-cni ready: true, restart count 0
Oct 30 01:00:40.410: INFO: Container kube-flannel ready: true, restart count 2
Oct 30 01:00:40.410: INFO: kube-multus-ds-amd64-wgkfq started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:40.410: INFO: Container kube-multus ready: true, restart count 1
Oct 30 01:00:40.410: INFO: kube-apiserver-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:40.410: INFO: Container kube-apiserver ready: true, restart count 0
Oct 30 01:00:40.410: INFO: kube-controller-manager-master1 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:40.410: INFO: Container kube-controller-manager ready: true, restart count 2
W1030 01:00:40.424649      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
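The cleanup diagnostics above (the namespace event dump and the per-node logging) are ordinary API reads. As a rough stand-alone illustration, a client-go sketch (v0.21-era List signature assumed; the kubeconfig path matches the --kubeconfig flag in the log) that reproduces the `STEP: Collecting events from namespace "services-983".` step:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Same kubeconfig the test run uses.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // List every event recorded in the test namespace.
        events, err := client.CoreV1().Events("services-983").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        // Print in roughly the same shape as the framework's event lines.
        for _, e := range events.Items {
            fmt.Printf("At %v - event for %s: {%s %s} %s: %s\n",
                e.FirstTimestamp, e.InvolvedObject.Name,
                e.Source.Component, e.Source.Host, e.Reason, e.Message)
        }
    }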
Oct 30 01:00:40.504: INFO: Latency metrics for node master1 Oct 30 01:00:40.504: INFO: Logging node info for node master2 Oct 30 01:00:40.506: INFO: Node Info: &Node{ObjectMeta:{master2 208792d3-d365-4ddb-83d4-10e6e818079c 72383 0 2021-10-29 21:06:06 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-29 21:18:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:19 +0000 
UTC,LastTransitionTime:2021-10-29 21:11:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:34 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:34 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:34 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:00:34 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:12290c1916d84ddda20431c28083da6a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:314e82b8-9747-4131-b883-220496309995,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 01:00:40.507: INFO: Logging kubelet events for node master2
Oct 30 01:00:40.509: INFO: Logging pods the kubelet thinks is on node master2
Oct 30 01:00:40.517: INFO: kube-proxy-5gz4v started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:40.517: INFO: Container kube-proxy ready: true, restart count 2
Oct 30 01:00:40.517: INFO: kube-flannel-qvqll started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 01:00:40.518: INFO: Init container install-cni ready: true, restart count 2
Oct 30 01:00:40.518: INFO: Container kube-flannel ready: true, restart count 1
Oct 30 01:00:40.518: INFO: kube-multus-ds-amd64-brkpk started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:40.518: INFO: Container kube-multus ready: true, restart count 1
Oct 30 01:00:40.518: INFO: node-exporter-lc9kk started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 01:00:40.518: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 30 01:00:40.518: INFO: Container node-exporter ready: true, restart count 0
Oct 30 01:00:40.518: INFO: kube-apiserver-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:40.518: INFO: Container kube-apiserver ready: true, restart count 0
Oct 30 01:00:40.518: INFO: kube-controller-manager-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:40.518: INFO: Container kube-controller-manager ready: true, restart count 3
Oct 30 01:00:40.518: INFO: kube-scheduler-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:40.518: INFO: Container kube-scheduler ready: true, restart count 2
W1030 01:00:40.531173      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
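In the same spirit, the "Logging node info" and "Logging pods the kubelet thinks is on node ..." blocks amount to a Node get plus a pod list filtered by spec.nodeName. A hedged sketch of those two reads (reusing the config setup from the previous snippet; the node name is illustrative, as the log iterates over every node in the cluster):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodeName := "master1" // illustrative; pick any node from the log

        // Node conditions, cf. "Logging node info for node master1".
        node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason)
        }

        // Pods bound to the node, across all namespaces, cf. the kubelet pod dump.
        pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
            metav1.ListOptions{FieldSelector: "spec.nodeName=" + nodeName})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s/%s started at %v\n", p.Namespace, p.Name, p.Status.StartTime)
        }
    }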
Oct 30 01:00:40.595: INFO: Latency metrics for node master2 Oct 30 01:00:40.595: INFO: Logging node info for node master3 Oct 30 01:00:40.597: INFO: Node Info: &Node{ObjectMeta:{master3 168f1589-e029-47ae-b194-10215fc22d6a 72343 0 2021-10-29 21:06:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-29 21:16:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-29 21:16:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:36 +0000 UTC,LastTransitionTime:2021-10-29 21:11:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:33 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:33 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:33 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:00:33 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de18dcb6cb4c493e9f4d987da2c8b3fd,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:89235c4b-b1f5-4716-bbd7-18b41c0bde74,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:00:40.598: INFO: Logging kubelet events for node master3 Oct 30 01:00:40.601: INFO: Logging pods the kubelet thinks is on node master3 Oct 30 01:00:40.611: INFO: kube-controller-manager-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:40.611: INFO: Container kube-controller-manager ready: true, restart count 1 Oct 30 01:00:40.611: INFO: kube-proxy-r6fpx started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:40.611: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:00:40.611: INFO: kube-flannel-rbdlt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:00:40.611: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:00:40.611: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:00:40.611: INFO: kube-multus-ds-amd64-bdwh9 started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:40.612: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:00:40.612: INFO: coredns-8474476ff8-wrwwv started at 2021-10-29 21:09:00 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:40.612: INFO: Container coredns ready: true, restart count 1 Oct 30 01:00:40.612: INFO: prometheus-operator-585ccfb458-czbr2 started at 2021-10-29 21:21:06 +0000 UTC (0+2 container statuses recorded) Oct 30 01:00:40.612: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:00:40.612: INFO: Container prometheus-operator ready: true, restart count 0 Oct 30 01:00:40.612: INFO: node-exporter-bv946 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:00:40.612: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 
01:00:40.612: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:00:40.612: INFO: kube-apiserver-master3 started at 2021-10-29 21:11:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:40.612: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 01:00:40.612: INFO: kube-scheduler-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:40.612: INFO: Container kube-scheduler ready: true, restart count 2 Oct 30 01:00:40.612: INFO: dns-autoscaler-7df78bfcfb-phsdx started at 2021-10-29 21:09:02 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:40.612: INFO: Container autoscaler ready: true, restart count 1 Oct 30 01:00:40.612: INFO: node-feature-discovery-controller-cff799f9f-qq7g4 started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:40.612: INFO: Container nfd-controller ready: true, restart count 0 W1030 01:00:40.627515 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:00:40.716: INFO: Latency metrics for node master3 Oct 30 01:00:40.716: INFO: Logging node info for node node1 Oct 30 01:00:40.719: INFO: Node Info: &Node{ObjectMeta:{node1 ddef9269-94c5-4165-81fb-a3b0c4ac5c75 72434 0 2021-10-29 21:07:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 
feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:38 +0000 UTC,LastTransitionTime:2021-10-29 21:11:38 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:38 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:38 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:38 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:00:38 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bf4179125e4495c89c046ed0ae7baf7,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ce868148-dc5e-4c7c-a555-42ee929547f7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432289,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:00:40.720: INFO: Logging kubelet events for node node1 Oct 30 01:00:40.723: INFO: Logging pods the kubelet thinks is on node node1 Oct 30 01:00:40.737: INFO: kube-flannel-phg88 started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:00:40.737: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:00:40.737: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:00:40.737: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:40.737: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:00:40.737: INFO: kube-proxy-z5hqt started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 
01:00:40.737: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:00:40.737: INFO: collectd-d45rv started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 01:00:40.737: INFO: Container collectd ready: true, restart count 0 Oct 30 01:00:40.737: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:00:40.737: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:00:40.737: INFO: busybox-host-aliases16071bc4-3fee-4274-8670-8535c8429e86 started at 2021-10-30 01:00:27 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:40.737: INFO: Container busybox-host-aliases16071bc4-3fee-4274-8670-8535c8429e86 ready: true, restart count 0 Oct 30 01:00:40.737: INFO: nginx-proxy-node1 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:40.737: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:00:40.737: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:40.737: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 01:00:40.737: INFO: cmk-init-discover-node1-n4mcc started at 2021-10-29 21:19:28 +0000 UTC (0+3 container statuses recorded) Oct 30 01:00:40.737: INFO: Container discover ready: false, restart count 0 Oct 30 01:00:40.737: INFO: Container init ready: false, restart count 0 Oct 30 01:00:40.737: INFO: Container install ready: false, restart count 0 Oct 30 01:00:40.737: INFO: cmk-89lqq started at 2021-10-29 21:20:10 +0000 UTC (0+2 container statuses recorded) Oct 30 01:00:40.737: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:00:40.737: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:00:40.737: INFO: node-exporter-256wm started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:00:40.737: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:00:40.737: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:00:40.738: INFO: prometheus-k8s-0 started at 2021-10-29 21:21:17 +0000 UTC (0+4 container statuses recorded) Oct 30 01:00:40.738: INFO: Container config-reloader ready: true, restart count 0 Oct 30 01:00:40.738: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 01:00:40.738: INFO: Container grafana ready: true, restart count 0 Oct 30 01:00:40.738: INFO: Container prometheus ready: true, restart count 1 Oct 30 01:00:40.738: INFO: pod-adoption-release started at 2021-10-30 01:00:37 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:40.738: INFO: Container pod-adoption-release ready: true, restart count 0 Oct 30 01:00:40.738: INFO: replace-27259260-6sjmk started at 2021-10-30 01:00:00 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:40.738: INFO: Container c ready: false, restart count 0 Oct 30 01:00:40.738: INFO: kube-multus-ds-amd64-68wrz started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:40.738: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:00:40.738: INFO: node-feature-discovery-worker-w5vdb started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:40.738: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:00:40.738: INFO: sample-webhook-deployment-78988fc6cd-xdchd started at 2021-10-30 01:00:26 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:40.738: INFO: Container sample-webhook ready: true, restart count 0 W1030 
01:00:40.754070 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:00:41.065: INFO: Latency metrics for node node1 Oct 30 01:00:41.065: INFO: Logging node info for node node2 Oct 30 01:00:41.068: INFO: Node Info: &Node{ObjectMeta:{node2 3b49ad19-ba56-4f4a-b1fa-eef102063de9 72316 0 2021-10-29 21:07:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:34 +0000 UTC,LastTransitionTime:2021-10-29 21:11:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:32 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:32 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:00:32 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:00:32 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7283436dd9e34722a6e4df817add95ed,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c219e7bd-582b-4d6c-b379-1161acc70676,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:00:41.069: INFO: Logging kubelet events for node node2 Oct 30 01:00:41.071: INFO: Logging pods the kubelet thinks is on node node2 Oct 30 01:00:41.085: INFO: node-feature-discovery-worker-h6lcp started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:41.085: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:00:41.085: INFO: simpletest.deployment-577cc9f676-nrdfl started at 2021-10-30 00:59:37 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:41.085: INFO: Container nginx ready: true, restart count 0 Oct 30 01:00:41.085: INFO: kube-flannel-f6s5v started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:00:41.085: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:00:41.085: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 01:00:41.085: INFO: kube-multus-ds-amd64-7tvbl started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:41.085: INFO: Container 
kube-multus ready: true, restart count 1 Oct 30 01:00:41.085: INFO: cmk-webhook-6c9d5f8578-ffk66 started at 2021-10-29 21:20:11 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:41.085: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 01:00:41.085: INFO: nginx-proxy-node2 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:41.085: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:00:41.085: INFO: kubernetes-dashboard-785dcbb76d-pbjjt started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:41.085: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 01:00:41.085: INFO: cmk-init-discover-node2-2fmmt started at 2021-10-29 21:19:48 +0000 UTC (0+3 container statuses recorded) Oct 30 01:00:41.085: INFO: Container discover ready: false, restart count 0 Oct 30 01:00:41.085: INFO: Container init ready: false, restart count 0 Oct 30 01:00:41.085: INFO: Container install ready: false, restart count 0 Oct 30 01:00:41.085: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:41.085: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:00:41.085: INFO: replace-27259259-7v89c started at 2021-10-30 00:59:00 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:41.085: INFO: Container c ready: false, restart count 0 Oct 30 01:00:41.085: INFO: cmk-8bpbf started at 2021-10-29 21:20:11 +0000 UTC (0+2 container statuses recorded) Oct 30 01:00:41.085: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:00:41.085: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:00:41.085: INFO: node-exporter-r77s4 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:00:41.085: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:00:41.085: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:00:41.085: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh started at 2021-10-29 21:24:23 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:41.085: INFO: Container tas-extender ready: true, restart count 0 Oct 30 01:00:41.085: INFO: simpletest.deployment-577cc9f676-hsfsd started at 2021-10-30 00:59:37 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:41.085: INFO: Container nginx ready: true, restart count 0 Oct 30 01:00:41.085: INFO: collectd-flvhl started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 01:00:41.085: INFO: Container collectd ready: true, restart count 0 Oct 30 01:00:41.085: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:00:41.085: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:00:41.085: INFO: externalname-service-jjz59 started at 2021-10-30 00:58:27 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:41.085: INFO: Container externalname-service ready: true, restart count 0 Oct 30 01:00:41.085: INFO: kube-proxy-76285 started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:41.085: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:00:41.085: INFO: liveness-61f35554-8f4b-4b92-a431-819be75359d8 started at 2021-10-30 01:00:33 +0000 UTC (0+1 container statuses recorded) Oct 30 01:00:41.085: INFO: Container agnhost-container ready: true, restart count 0 Oct 30 01:00:41.085: INFO: externalname-service-7plvb started at 2021-10-30 00:58:27 +0000 UTC (0+1 container 
statuses recorded)
Oct 30 01:00:41.085: INFO: Container externalname-service ready: true, restart count 0
Oct 30 01:00:41.085: INFO: execpodptmfk started at 2021-10-30 00:58:33 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:00:41.085: INFO: Container agnhost-container ready: true, restart count 0
W1030 01:00:41.108079 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 01:00:42.579: INFO: Latency metrics for node node2
Oct 30 01:00:42.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-983" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• Failure [134.907 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should be able to change the type from ExternalName to NodePort [Conformance] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 30 01:00:40.362: Unexpected error:
    <*errors.errorString | 0xc0056d8030>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30574 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30574 over TCP protocol
occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351
------------------------------
{"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":10,"skipped":266,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
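Note on the failure above: the spec converted an ExternalName Service to type=NodePort and then could not complete a TCP connection to node1's InternalIP (10.10.190.207) on the allocated NodePort (30574) within the 2m0s budget. Below is a minimal standalone Go sketch of this kind of reachability probe, assuming only the endpoint and timeout quoted in the error; it is not the e2e framework's actual helper, which runs the check from an in-cluster exec pod (execpodptmfk above).

package main

// Probe a NodePort endpoint over raw TCP, retrying until a deadline
// expires, mirroring the shape of the check that timed out above.

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const endpoint = "10.10.190.207:30574" // node InternalIP + NodePort from the failure message
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", endpoint, 5*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("reachable:", endpoint)
			return
		}
		fmt.Println("dial failed, retrying:", err)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("service is not reachable within 2m0s timeout on endpoint", endpoint)
}

Because the probe above succeeds only when kube-proxy on the target node has programmed the NodePort, a persistent dial failure like this one usually points at the service/endpoint plumbing on node1 rather than at the test itself.
------------------------------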
[BeforeEach] [sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:00:37.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Given a Pod with a 'name' label pod-adoption-release is created
Oct 30 01:00:37.055: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true)
Oct 30 01:00:39.059: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true)
Oct 30 01:00:41.060: INFO: The status of Pod pod-adoption-release is Running (Ready = true)
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Oct 30 01:00:42.072: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:00:43.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5588" for this suite.
• [SLOW TEST:6.077 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation and release no longer matching pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":9,"skipped":190,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
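The PASSED spec above checks controller adoption semantics: a ReplicaSet adopts an orphan pod whose labels match its selector, and releases the pod once its labels stop matching. A minimal Go sketch of the label-selector decision at the center of that behavior, using k8s.io/apimachinery; the changed label value is hypothetical and this is an illustration, not the suite's code:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	// Selector equivalent to the ReplicaSet's spec.selector.matchLabels.
	selector := labels.SelectorFromSet(labels.Set{"name": "pod-adoption-release"})

	orphan := labels.Set{"name": "pod-adoption-release"}            // matches -> pod gets adopted
	relabeled := labels.Set{"name": "pod-adoption-release-changed"} // hypothetical new value -> pod gets released

	fmt.Println("adopt:", selector.Matches(orphan))    // true
	fmt.Println("keep: ", selector.Matches(relabeled)) // false
}
------------------------------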
when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:00:43.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3325" for this suite. STEP: Destroying namespace "webhook-3325-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.630 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":16,"skipped":330,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:43.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pdb STEP: Waiting for the pdb to be processed STEP: updating the pdb STEP: Waiting for the pdb to be processed STEP: patching the pdb STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be deleted [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:00:47.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-47" for this suite. 
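------------------------------
The PodDisruptionBudget sequence just logged (create, wait for processing, update, patch, delete) maps directly onto the policy/v1 client in client-go. A minimal sketch of the create step, reusing the suite's kubeconfig path; the name "demo-pdb", namespace "default", app=demo selector and minAvailable=2 are illustrative, not values from this run:

package main

import (
    "context"
    "fmt"

    policyv1 "k8s.io/api/policy/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a client from the same kubeconfig the suite uses.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    minAvail := intstr.FromInt(2)
    pdb := &policyv1.PodDisruptionBudget{
        ObjectMeta: metav1.ObjectMeta{Name: "demo-pdb"},
        Spec: policyv1.PodDisruptionBudgetSpec{
            MinAvailable: &minAvail,
            Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "demo"}},
        },
    }
    created, err := cs.PolicyV1().PodDisruptionBudgets("default").Create(context.TODO(), pdb, metav1.CreateOptions{})
    if err != nil {
        panic(err)
    }
    // "Waiting for the pdb to be processed" in the log corresponds to the
    // controller filling in Status after creation.
    fmt.Println("created:", created.Name)
}
------------------------------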
• ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":17,"skipped":346,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:41.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Oct 30 01:00:41.083: INFO: The status of Pod annotationupdate00494766-3d34-44cc-a25d-8b6dd370199d is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:00:43.087: INFO: The status of Pod annotationupdate00494766-3d34-44cc-a25d-8b6dd370199d is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:00:45.087: INFO: The status of Pod annotationupdate00494766-3d34-44cc-a25d-8b6dd370199d is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:00:47.087: INFO: The status of Pod annotationupdate00494766-3d34-44cc-a25d-8b6dd370199d is Running (Ready = true) Oct 30 01:00:47.606: INFO: Successfully updated pod "annotationupdate00494766-3d34-44cc-a25d-8b6dd370199d" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:00:49.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4042" for this suite. 
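------------------------------
The annotation-update test whose body just completed exercises a projected downward API volume: the kubelet rewrites the projected file when pod metadata changes, which is what "Successfully updated pod" verifies. A sketch of such a pod spec using the core/v1 types; the pod name, annotation, image and mount path are illustrative:

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// annotationPod returns a pod whose /etc/podinfo/annotations file is kept
// in sync with metadata.annotations by the kubelet.
func annotationPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:        "annotation-demo",
            Annotations: map[string]string{"build": "one"},
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:         "c",
                Image:        "k8s.gcr.io/e2e-test-images/agnhost:2.32",
                Args:         []string{"pause"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path:     "annotations",
                                    FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
                                }},
                            },
                        }},
                    },
                },
            }},
        },
    }
}
------------------------------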
• [SLOW TEST:8.580 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:42.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Oct 30 01:00:46.681: INFO: &Pod{ObjectMeta:{send-events-f9e4ee9d-1894-46aa-85c7-f06c6e17c2a6 events-1995 e1ac8e60-8986-40ef-9aeb-493550f350b5 72579 0 2021-10-30 01:00:42 +0000 UTC map[name:foo time:659295758] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.192" ], "mac": "1e:fb:55:58:b6:c3", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.192" ], "mac": "1e:fb:55:58:b6:c3", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2021-10-30 01:00:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:00:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:00:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.192\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zblbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zblbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace
:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:00:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:00:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:00:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:00:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.192,StartTime:2021-10-30 01:00:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:00:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://9d2b1745cbafc5d05a68c70c3b1fddaa36054219689f5be618a0782278937aa4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.192,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Oct 30 01:00:48.685: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Oct 30 01:00:50.688: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:00:50.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1995" for this suite. 
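------------------------------
The scheduler and kubelet events the test just checked for can be retrieved the same way, with a field selector on the involved object. A sketch assuming an existing *kubernetes.Clientset; printPodEvents and its parameters are illustrative names:

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// printPodEvents lists core/v1 events whose involvedObject matches the pod;
// this is where "Scheduled" (from the scheduler) and "Pulling"/"Created"/
// "Started" (from the kubelet) show up.
func printPodEvents(cs *kubernetes.Clientset, ns, pod string) error {
    evs, err := cs.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{
        FieldSelector: "involvedObject.name=" + pod + ",involvedObject.namespace=" + ns,
    })
    if err != nil {
        return err
    }
    for _, e := range evs.Items {
        fmt.Printf("%s\t%s\t%s\n", e.Source.Component, e.Reason, e.Message)
    }
    return nil
}
------------------------------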
• [SLOW TEST:8.061 seconds] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":11,"skipped":289,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:43.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with configMap that has name projected-configmap-test-upd-93ad449a-fe94-4eb2-ad76-674e03b9528b STEP: Creating the pod Oct 30 01:00:43.165: INFO: The status of Pod pod-projected-configmaps-a6460231-762f-4d99-8196-bef3f6a5d73b is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:00:45.168: INFO: The status of Pod pod-projected-configmaps-a6460231-762f-4d99-8196-bef3f6a5d73b is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:00:47.205: INFO: The status of Pod pod-projected-configmaps-a6460231-762f-4d99-8196-bef3f6a5d73b is Running (Ready = true) STEP: Updating configmap projected-configmap-test-upd-93ad449a-fe94-4eb2-ad76-674e03b9528b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:00:51.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-129" for this suite. 
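------------------------------
The configMap-update test above uses the same projection mechanism with a ConfigMap source: after "Updating configmap ..." the kubelet refreshes the projected file on its next sync, which is what "waiting to observe update in volume" polls for. A sketch of the volume definition only; the volume, ConfigMap and key names are illustrative:

import corev1 "k8s.io/api/core/v1"

// configMapVolume projects key "data-1" of ConfigMap "demo-config" into the
// mount as file "data"; later edits to the ConfigMap propagate to running pods.
func configMapVolume() corev1.Volume {
    return corev1.Volume{
        Name: "cm",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
                        Items:                []corev1.KeyToPath{{Key: "data-1", Path: "data"}},
                    },
                }},
            },
        },
    }
}
------------------------------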
• [SLOW TEST:8.120 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":204,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SS ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":384,"failed":0} [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:49.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-25d70210-e1f0-4d63-9356-2fa49dad0035 STEP: Creating a pod to test consume secrets Oct 30 01:00:49.664: INFO: Waiting up to 5m0s for pod "pod-secrets-6e6ba79c-c10f-4755-aa40-c515fb532974" in namespace "secrets-9562" to be "Succeeded or Failed" Oct 30 01:00:49.667: INFO: Pod "pod-secrets-6e6ba79c-c10f-4755-aa40-c515fb532974": Phase="Pending", Reason="", readiness=false. Elapsed: 2.407054ms Oct 30 01:00:51.669: INFO: Pod "pod-secrets-6e6ba79c-c10f-4755-aa40-c515fb532974": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004679144s Oct 30 01:00:53.672: INFO: Pod "pod-secrets-6e6ba79c-c10f-4755-aa40-c515fb532974": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007778648s STEP: Saw pod success Oct 30 01:00:53.672: INFO: Pod "pod-secrets-6e6ba79c-c10f-4755-aa40-c515fb532974" satisfied condition "Succeeded or Failed" Oct 30 01:00:53.674: INFO: Trying to get logs from node node2 pod pod-secrets-6e6ba79c-c10f-4755-aa40-c515fb532974 container secret-volume-test: STEP: delete the pod Oct 30 01:00:53.708: INFO: Waiting for pod pod-secrets-6e6ba79c-c10f-4755-aa40-c515fb532974 to disappear Oct 30 01:00:53.710: INFO: Pod pod-secrets-6e6ba79c-c10f-4755-aa40-c515fb532974 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:00:53.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9562" for this suite. 
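------------------------------
Consuming one Secret in multiple volumes, as the passing test does, simply means two volume entries referencing the same SecretName. A sketch; the names and mount paths are illustrative:

import corev1 "k8s.io/api/core/v1"

// twoSecretVolumes mounts the same Secret at two paths in one pod; the
// kubelet materializes an independent tmpfs copy for each volume.
func twoSecretVolumes(secretName string) ([]corev1.Volume, []corev1.VolumeMount) {
    vols := []corev1.Volume{
        {Name: "secret-a", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
        {Name: "secret-b", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
    }
    mounts := []corev1.VolumeMount{
        {Name: "secret-a", MountPath: "/etc/secret-volume-1", ReadOnly: true},
        {Name: "secret-b", MountPath: "/etc/secret-volume-2", ReadOnly: true},
    }
    return vols, mounts
}
------------------------------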
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":384,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:53.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create deployment with httpd image Oct 30 01:00:53.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6564 create -f -' Oct 30 01:00:54.118: INFO: stderr: "" Oct 30 01:00:54.118: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Oct 30 01:00:54.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6564 diff -f -' Oct 30 01:00:54.430: INFO: rc: 1 Oct 30 01:00:54.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6564 delete -f -' Oct 30 01:00:54.581: INFO: stderr: "" Oct 30 01:00:54.581: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:00:54.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6564" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":30,"skipped":401,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:54.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching Oct 30 01:00:54.631: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating Oct 30 01:00:54.644: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:00:54.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-9735" for this suite. 
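------------------------------
The RuntimeClasses API walk above (create, watch, get, list, patch, update, delete, delete-collection) runs against the cluster-scoped node.k8s.io/v1 client. A minimal create-then-delete sketch assuming an existing clientset; the class name "demo-runtimeclass" is illustrative, and the handler must match one the node's CRI runtime actually configures:

import (
    "context"

    nodev1 "k8s.io/api/node/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createRuntimeClass registers a RuntimeClass whose Handler names a runtime
// handler on the node (e.g. "runc"), then removes it again.
func createRuntimeClass(cs *kubernetes.Clientset) error {
    rc := &nodev1.RuntimeClass{
        ObjectMeta: metav1.ObjectMeta{Name: "demo-runtimeclass"},
        Handler:    "runc",
    }
    if _, err := cs.NodeV1().RuntimeClasses().Create(context.TODO(), rc, metav1.CreateOptions{}); err != nil {
        return err
    }
    return cs.NodeV1().RuntimeClasses().Delete(context.TODO(), rc.Name, metav1.DeleteOptions{})
}
------------------------------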
• ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":31,"skipped":405,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:28.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Oct 30 01:00:28.023: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:00:36.575: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:00:54.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1885" for this suite. • [SLOW TEST:26.795 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":18,"skipped":373,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:54.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Oct 30 01:00:54.743: INFO: Waiting up to 5m0s for pod "pod-ab6736fd-d1ee-4eba-a984-e931e3b86f26" in namespace "emptydir-7769" to be "Succeeded or Failed" Oct 30 01:00:54.744: INFO: Pod "pod-ab6736fd-d1ee-4eba-a984-e931e3b86f26": Phase="Pending", Reason="", readiness=false. Elapsed: 1.879836ms Oct 30 01:00:56.747: INFO: Pod "pod-ab6736fd-d1ee-4eba-a984-e931e3b86f26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004885572s Oct 30 01:00:58.751: INFO: Pod "pod-ab6736fd-d1ee-4eba-a984-e931e3b86f26": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.007907685s Oct 30 01:01:00.754: INFO: Pod "pod-ab6736fd-d1ee-4eba-a984-e931e3b86f26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.010955713s STEP: Saw pod success Oct 30 01:01:00.754: INFO: Pod "pod-ab6736fd-d1ee-4eba-a984-e931e3b86f26" satisfied condition "Succeeded or Failed" Oct 30 01:01:00.756: INFO: Trying to get logs from node node2 pod pod-ab6736fd-d1ee-4eba-a984-e931e3b86f26 container test-container: STEP: delete the pod Oct 30 01:01:00.768: INFO: Waiting for pod pod-ab6736fd-d1ee-4eba-a984-e931e3b86f26 to disappear Oct 30 01:01:00.772: INFO: Pod pod-ab6736fd-d1ee-4eba-a984-e931e3b86f26 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:00.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7769" for this suite. • [SLOW TEST:6.069 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":434,"failed":0} S ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:54.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:00:54.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-121 create -f -' Oct 30 01:00:55.197: INFO: stderr: "" Oct 30 01:00:55.197: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Oct 30 01:00:55.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-121 create -f -' Oct 30 01:00:55.529: INFO: stderr: "" Oct 30 01:00:55.529: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Oct 30 01:00:56.532: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:00:56.532: INFO: Found 0 / 1 Oct 30 01:00:57.532: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:00:57.532: INFO: Found 0 / 1 Oct 30 01:00:58.533: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:00:58.533: INFO: Found 0 / 1 Oct 30 01:00:59.532: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:00:59.533: INFO: Found 0 / 1 Oct 30 01:01:00.535: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:01:00.535: INFO: Found 1 / 1 Oct 30 01:01:00.535: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Oct 30 01:01:00.538: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:01:00.538: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Oct 30 01:01:00.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-121 describe pod agnhost-primary-ndjr5' Oct 30 01:01:00.705: INFO: stderr: "" Oct 30 01:01:00.705: INFO: stdout: "Name: agnhost-primary-ndjr5\nNamespace: kubectl-121\nPriority: 0\nNode: node2/10.10.190.208\nStart Time: Sat, 30 Oct 2021 01:00:55 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.40\"\n ],\n \"mac\": \"4a:a8:d6:f8:6b:f7\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.40\"\n ],\n \"mac\": \"4a:a8:d6:f8:6b:f7\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: collectd\nStatus: Running\nIP: 10.244.4.40\nIPs:\n IP: 10.244.4.40\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: docker://b8b881ce4a06624f2c2a581a27506fa4d387adb7c1c1a367a45f5b1e69ed4de1\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 30 Oct 2021 01:00:59 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xstcq (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-xstcq:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned kubectl-121/agnhost-primary-ndjr5 to node2\n Normal Pulling 2s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n Normal Pulled 1s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 337.247924ms\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" Oct 30 01:01:00.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-121 describe rc agnhost-primary' Oct 30 01:01:00.892: INFO: stderr: "" Oct 30 01:01:00.892: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-121\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-primary-ndjr5\n" Oct 30 
01:01:00.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-121 describe service agnhost-primary' Oct 30 01:01:01.060: INFO: stderr: "" Oct 30 01:01:01.061: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-121\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.233.22.33\nIPs: 10.233.22.33\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.4.40:6379\nSession Affinity: None\nEvents: \n" Oct 30 01:01:01.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-121 describe node master1' Oct 30 01:01:01.251: INFO: stderr: "" Oct 30 01:01:01.251: INFO: stdout: "Name: master1\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=master1\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations: flannel.alpha.coreos.com/backend-data: null\n flannel.alpha.coreos.com/backend-type: host-gw\n flannel.alpha.coreos.com/kube-subnet-manager: true\n flannel.alpha.coreos.com/public-ip: 10.10.190.202\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 29 Oct 2021 21:05:34 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: master1\n AcquireTime: \n RenewTime: Sat, 30 Oct 2021 01:00:57 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Fri, 29 Oct 2021 21:11:27 +0000 Fri, 29 Oct 2021 21:11:27 +0000 FlannelIsUp Flannel is running on this node\n MemoryPressure False Sat, 30 Oct 2021 01:00:58 +0000 Fri, 29 Oct 2021 21:05:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 30 Oct 2021 01:00:58 +0000 Fri, 29 Oct 2021 21:05:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 30 Oct 2021 01:00:58 +0000 Fri, 29 Oct 2021 21:05:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 30 Oct 2021 01:00:58 +0000 Fri, 29 Oct 2021 21:08:35 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.10.190.202\n Hostname: master1\nCapacity:\n cpu: 80\n ephemeral-storage: 439913340Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 196518328Ki\n pods: 110\nAllocatable:\n cpu: 79550m\n ephemeral-storage: 405424133473\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 195629496Ki\n pods: 110\nSystem Info:\n Machine ID: 5d3ed60c561e427db72df14bd9006ed0\n System UUID: 00ACFB60-0631-E711-906E-0017A4403562\n Boot ID: 01b9d6bc-4126-4864-a1df-901a1bee4906\n Kernel Version: 3.10.0-1160.45.1.el7.x86_64\n OS Image: CentOS Linux 7 (Core)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://20.10.10\n Kubelet Version: v1.21.1\n Kube-Proxy Version: v1.21.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system container-registry-65d7c44b96-zzkfl 0 (0%) 0 
(0%) 0 (0%) 0 (0%) 3h48m\n kube-system coredns-8474476ff8-lczbr 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 3h51m\n kube-system kube-apiserver-master1 250m (0%) 0 (0%) 0 (0%) 0 (0%) 3h46m\n kube-system kube-controller-manager-master1 200m (0%) 0 (0%) 0 (0%) 0 (0%) 3h54m\n kube-system kube-flannel-d4pmt 150m (0%) 300m (0%) 64M (0%) 500M (0%) 3h52m\n kube-system kube-multus-ds-amd64-wgkfq 100m (0%) 100m (0%) 90Mi (0%) 90Mi (0%) 3h52m\n kube-system kube-proxy-z5k8p 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3h53m\n kube-system kube-scheduler-master1 100m (0%) 0 (0%) 0 (0%) 0 (0%) 3h36m\n monitoring node-exporter-fv84w 112m (0%) 270m (0%) 200Mi (0%) 220Mi (0%) 3h39m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1012m (1%) 670m (0%)\n memory 431140Ki (0%) 1003316480 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Oct 30 01:01:01.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-121 describe namespace kubectl-121' Oct 30 01:01:01.412: INFO: stderr: "" Oct 30 01:01:01.412: INFO: stdout: "Name: kubectl-121\nLabels: e2e-framework=kubectl\n e2e-run=c3156c19-35f6-47fa-89c1-a8fd96bf129e\n kubernetes.io/metadata.name=kubectl-121\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:01.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-121" for this suite. • [SLOW TEST:6.543 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1084 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":19,"skipped":421,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:01.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Oct 30 01:01:01.521: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-737 d5786cfc-416c-4f6f-b09a-9687d3f15212 73009 0 2021-10-30 01:01:01 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-10-30 01:01:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 01:01:01.521: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-737 d5786cfc-416c-4f6f-b09a-9687d3f15212 73010 0 2021-10-30 01:01:01 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-10-30 01:01:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:01.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-737" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":20,"skipped":451,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:51.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 01:00:51.611: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 01:00:53.619: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152451, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152451, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152451, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152451, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the 
endpoint Oct 30 01:00:56.630: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:00:56.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8730-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:04.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9922" for this suite. STEP: Destroying namespace "webhook-9922-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.466 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":11,"skipped":206,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:00.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-f224bf53-0cd9-446e-80cc-f5b4b91d8862 STEP: Creating a pod to test consume secrets Oct 30 01:01:00.817: INFO: Waiting up to 5m0s for pod "pod-secrets-fc0970ad-58f0-4f38-af6c-71e6e67434b0" in namespace "secrets-1593" to be "Succeeded or Failed" Oct 30 01:01:00.819: INFO: Pod "pod-secrets-fc0970ad-58f0-4f38-af6c-71e6e67434b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133724ms Oct 30 01:01:02.821: INFO: Pod "pod-secrets-fc0970ad-58f0-4f38-af6c-71e6e67434b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004727271s Oct 30 01:01:04.824: INFO: Pod "pod-secrets-fc0970ad-58f0-4f38-af6c-71e6e67434b0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007834813s STEP: Saw pod success Oct 30 01:01:04.825: INFO: Pod "pod-secrets-fc0970ad-58f0-4f38-af6c-71e6e67434b0" satisfied condition "Succeeded or Failed" Oct 30 01:01:04.828: INFO: Trying to get logs from node node2 pod pod-secrets-fc0970ad-58f0-4f38-af6c-71e6e67434b0 container secret-env-test: STEP: delete the pod Oct 30 01:01:04.839: INFO: Waiting for pod pod-secrets-fc0970ad-58f0-4f38-af6c-71e6e67434b0 to disappear Oct 30 01:01:04.841: INFO: Pod pod-secrets-fc0970ad-58f0-4f38-af6c-71e6e67434b0 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:04.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1593" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":435,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:01.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota Oct 30 01:01:01.583: INFO: Pod name sample-pod: Found 0 pods out of 1 Oct 30 01:01:06.586: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the replicaset Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:06.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3117" for this suite. 
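------------------------------
The "getting scale subresource" / "updating a scale subresource" steps above correspond to GetScale and UpdateScale on the apps/v1 ReplicaSet client; the Scale object returned is the shared autoscaling/v1 type used by all scalable resources. A sketch assuming an existing clientset; the function name and parameters are illustrative:

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// resizeViaScale reads the ReplicaSet's scale subresource and writes back a
// new replica count, as the test does for "test-rs".
func resizeViaScale(cs *kubernetes.Clientset, ns, name string, replicas int32) error {
    scale, err := cs.AppsV1().ReplicaSets(ns).GetScale(context.TODO(), name, metav1.GetOptions{})
    if err != nil {
        return err
    }
    scale.Spec.Replicas = replicas
    _, err = cs.AppsV1().ReplicaSets(ns).UpdateScale(context.TODO(), name, scale, metav1.UpdateOptions{})
    return err
}
------------------------------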
• [SLOW TEST:5.045 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":21,"skipped":461,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:04.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:01:04.857: INFO: The status of Pod pod-secrets-d55ed7de-8c34-4186-9eff-5c31ea822af1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:01:06.860: INFO: The status of Pod pod-secrets-d55ed7de-8c34-4186-9eff-5c31ea822af1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:01:08.861: INFO: The status of Pod pod-secrets-d55ed7de-8c34-4186-9eff-5c31ea822af1 is Running (Ready = true) STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:08.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2820" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":12,"skipped":270,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:47.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-downwardapi-vxgq STEP: Creating a pod to test atomic-volume-subpath Oct 30 01:00:47.931: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-vxgq" in namespace "subpath-9004" to be "Succeeded or Failed" Oct 30 01:00:47.933: INFO: Pod "pod-subpath-test-downwardapi-vxgq": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1.970724ms Oct 30 01:00:49.935: INFO: Pod "pod-subpath-test-downwardapi-vxgq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004760743s Oct 30 01:00:51.939: INFO: Pod "pod-subpath-test-downwardapi-vxgq": Phase="Running", Reason="", readiness=true. Elapsed: 4.008345073s Oct 30 01:00:53.942: INFO: Pod "pod-subpath-test-downwardapi-vxgq": Phase="Running", Reason="", readiness=true. Elapsed: 6.010929406s Oct 30 01:00:55.946: INFO: Pod "pod-subpath-test-downwardapi-vxgq": Phase="Running", Reason="", readiness=true. Elapsed: 8.015214826s Oct 30 01:00:57.951: INFO: Pod "pod-subpath-test-downwardapi-vxgq": Phase="Running", Reason="", readiness=true. Elapsed: 10.020636492s Oct 30 01:00:59.956: INFO: Pod "pod-subpath-test-downwardapi-vxgq": Phase="Running", Reason="", readiness=true. Elapsed: 12.025279281s Oct 30 01:01:01.960: INFO: Pod "pod-subpath-test-downwardapi-vxgq": Phase="Running", Reason="", readiness=true. Elapsed: 14.029592468s Oct 30 01:01:03.963: INFO: Pod "pod-subpath-test-downwardapi-vxgq": Phase="Running", Reason="", readiness=true. Elapsed: 16.03285138s Oct 30 01:01:05.967: INFO: Pod "pod-subpath-test-downwardapi-vxgq": Phase="Running", Reason="", readiness=true. Elapsed: 18.036545763s Oct 30 01:01:07.972: INFO: Pod "pod-subpath-test-downwardapi-vxgq": Phase="Running", Reason="", readiness=true. Elapsed: 20.041107631s Oct 30 01:01:09.976: INFO: Pod "pod-subpath-test-downwardapi-vxgq": Phase="Running", Reason="", readiness=true. Elapsed: 22.044861055s Oct 30 01:01:11.978: INFO: Pod "pod-subpath-test-downwardapi-vxgq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.047177459s STEP: Saw pod success Oct 30 01:01:11.978: INFO: Pod "pod-subpath-test-downwardapi-vxgq" satisfied condition "Succeeded or Failed" Oct 30 01:01:11.980: INFO: Trying to get logs from node node1 pod pod-subpath-test-downwardapi-vxgq container test-container-subpath-downwardapi-vxgq: STEP: delete the pod Oct 30 01:01:13.466: INFO: Waiting for pod pod-subpath-test-downwardapi-vxgq to disappear Oct 30 01:01:13.468: INFO: Pod pod-subpath-test-downwardapi-vxgq no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-vxgq Oct 30 01:01:13.468: INFO: Deleting pod "pod-subpath-test-downwardapi-vxgq" in namespace "subpath-9004" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:13.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9004" for this suite. 
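------------------------------
The subpath test just logged exposes a single path from inside a volume rather than the whole volume; in a container spec that is one VolumeMount with SubPath set. A sketch; the volume name, mount path and subpath are illustrative:

import corev1 "k8s.io/api/core/v1"

// subPathMount mounts only the file "downward/podname" from the volume
// "podinfo" at /test-volume inside the container, instead of the volume root.
func subPathMount() corev1.VolumeMount {
    return corev1.VolumeMount{
        Name:      "podinfo",
        MountPath: "/test-volume",
        SubPath:   "downward/podname",
    }
}
------------------------------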
• [SLOW TEST:25.587 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":18,"skipped":361,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:08.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:01:08.948: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"5186aafe-96c5-41b4-85a6-98ce5aab473d", Controller:(*bool)(0xc004ec601a), BlockOwnerDeletion:(*bool)(0xc004ec601b)}} Oct 30 01:01:08.954: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"e2be71f6-a5ad-4f29-b1b1-58460a4800d4", Controller:(*bool)(0xc00467ee2a), BlockOwnerDeletion:(*bool)(0xc00467ee2b)}} Oct 30 01:01:08.959: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"508ec4da-6a06-4194-a17b-b6ae99a180a1", Controller:(*bool)(0xc00475fb5a), BlockOwnerDeletion:(*bool)(0xc00475fb5b)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:13.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8824" for this suite. 
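------------------------------
The dependency circle the garbage-collector test builds is three pods whose OwnerReferences point at each other (pod1->pod3, pod2->pod1, pod3->pod2, per the log); the collector must still reclaim all three rather than deadlock. A sketch of wiring one link; the log shows Controller and BlockOwnerDeletion as non-nil pointers without printing their values, so true is assumed here:

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ownPod makes child owned by owner; calling it three times in a cycle
// reproduces the circle the test sets up. Controller and BlockOwnerDeletion
// are set to true for illustration.
func ownPod(child, owner *corev1.Pod) {
    ctrl, block := true, true
    child.OwnerReferences = []metav1.OwnerReference{{
        APIVersion:         "v1",
        Kind:               "Pod",
        Name:               owner.Name,
        UID:                owner.UID,
        Controller:         &ctrl,
        BlockOwnerDeletion: &block,
    }}
}
------------------------------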
• [SLOW TEST:5.099 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":13,"skipped":271,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:06.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Oct 30 01:01:06.679: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:15.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7423" for this suite. 
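------------------------------
The fixture behind this InitContainer test pairs a failing init container with restartPolicy: Never, so the first init failure is final: the app container must never start and the pod must end up Failed. A hedged sketch (the busybox image and commands are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
		Spec: corev1.PodSpec{
			// Never means no retry of the failed init container.
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init1",
				Image:   "busybox:1.29",
				Command: []string{"/bin/false"}, // always exits non-zero
			}},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "busybox:1.29",
				Command: []string{"/bin/true"}, // never reached
			}},
		},
	}
	b, _ := json.Marshal(pod.Spec)
	fmt.Println(string(b))
}
------------------------------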
• [SLOW TEST:8.439 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":22,"skipped":478,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:50.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Oct 30 01:00:50.743: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:00:59.259: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:17.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1580" for this suite. • [SLOW TEST:26.791 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":12,"skipped":299,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:04.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:17.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8522" for this suite. • [SLOW TEST:13.087 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":34,"skipped":439,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:14.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override arguments Oct 30 01:01:14.035: INFO: Waiting up to 5m0s for pod "client-containers-ba4ada67-d7ce-41d2-a833-f03059b7767e" in namespace "containers-2812" to be "Succeeded or Failed" Oct 30 01:01:14.037: INFO: Pod "client-containers-ba4ada67-d7ce-41d2-a833-f03059b7767e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.311417ms Oct 30 01:01:16.041: INFO: Pod "client-containers-ba4ada67-d7ce-41d2-a833-f03059b7767e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005582853s Oct 30 01:01:18.044: INFO: Pod "client-containers-ba4ada67-d7ce-41d2-a833-f03059b7767e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008935679s Oct 30 01:01:20.049: INFO: Pod "client-containers-ba4ada67-d7ce-41d2-a833-f03059b7767e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.013565846s STEP: Saw pod success Oct 30 01:01:20.049: INFO: Pod "client-containers-ba4ada67-d7ce-41d2-a833-f03059b7767e" satisfied condition "Succeeded or Failed" Oct 30 01:01:20.051: INFO: Trying to get logs from node node2 pod client-containers-ba4ada67-d7ce-41d2-a833-f03059b7767e container agnhost-container: STEP: delete the pod Oct 30 01:01:20.064: INFO: Waiting for pod client-containers-ba4ada67-d7ce-41d2-a833-f03059b7767e to disappear Oct 30 01:01:20.066: INFO: Pod client-containers-ba4ada67-d7ce-41d2-a833-f03059b7767e no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:20.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2812" for this suite. • [SLOW TEST:6.068 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":279,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:17.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Oct 30 01:01:17.986: INFO: Waiting up to 5m0s for pod "pod-969ee0b1-4d59-4257-87f3-8de5c01e81b7" in namespace "emptydir-5245" to be "Succeeded or Failed" Oct 30 01:01:17.989: INFO: Pod "pod-969ee0b1-4d59-4257-87f3-8de5c01e81b7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.303444ms Oct 30 01:01:19.992: INFO: Pod "pod-969ee0b1-4d59-4257-87f3-8de5c01e81b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006296648s Oct 30 01:01:21.996: INFO: Pod "pod-969ee0b1-4d59-4257-87f3-8de5c01e81b7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010219894s STEP: Saw pod success Oct 30 01:01:21.996: INFO: Pod "pod-969ee0b1-4d59-4257-87f3-8de5c01e81b7" satisfied condition "Succeeded or Failed" Oct 30 01:01:21.999: INFO: Trying to get logs from node node1 pod pod-969ee0b1-4d59-4257-87f3-8de5c01e81b7 container test-container: STEP: delete the pod Oct 30 01:01:22.489: INFO: Waiting for pod pod-969ee0b1-4d59-4257-87f3-8de5c01e81b7 to disappear Oct 30 01:01:22.491: INFO: Pod pod-969ee0b1-4d59-4257-87f3-8de5c01e81b7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:22.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5245" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":443,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:15.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:01:15.125: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Oct 30 01:01:15.130: INFO: Pod name sample-pod: Found 0 pods out of 1 Oct 30 01:01:20.134: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 30 01:01:20.134: INFO: Creating deployment "test-rolling-update-deployment" Oct 30 01:01:20.137: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Oct 30 01:01:20.142: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Oct 30 01:01:22.148: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Oct 30 01:01:22.151: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152480, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152480, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152480, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152480, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:01:24.153: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152480, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152480, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152480, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152480, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:01:26.155: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 30 01:01:26.162: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-8623 315de9ba-a150-4e0f-8a18-e44bec21d5ac 73623 1 2021-10-30 01:01:20 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-10-30 01:01:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-30 01:01:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00559c658 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-30 01:01:20 +0000 UTC,LastTransitionTime:2021-10-30 01:01:20 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-585b757574" has successfully progressed.,LastUpdateTime:2021-10-30 01:01:25 +0000 UTC,LastTransitionTime:2021-10-30 01:01:20 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 30 01:01:26.164: INFO: New ReplicaSet "test-rolling-update-deployment-585b757574" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-585b757574 deployment-8623 552bb049-2e7b-4be9-a8f2-5df2e7cf9534 73612 1 2021-10-30 01:01:20 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 315de9ba-a150-4e0f-8a18-e44bec21d5ac 0xc00559cb07 0xc00559cb08}] [] [{kube-controller-manager Update apps/v1 2021-10-30 01:01:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"315de9ba-a150-4e0f-8a18-e44bec21d5ac\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 585b757574,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] 
Always 0xc00559cb98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 30 01:01:26.165: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Oct 30 01:01:26.165: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-8623 48f3b966-2303-4f7e-9a87-33536155aa9d 73622 2 2021-10-30 01:01:15 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 315de9ba-a150-4e0f-8a18-e44bec21d5ac 0xc00559c9f7 0xc00559c9f8}] [] [{e2e.test Update apps/v1 2021-10-30 01:01:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-30 01:01:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"315de9ba-a150-4e0f-8a18-e44bec21d5ac\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00559ca98 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 30 01:01:26.168: INFO: Pod "test-rolling-update-deployment-585b757574-x5mgw" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-585b757574-x5mgw test-rolling-update-deployment-585b757574- deployment-8623 fdbd2d0a-96e1-400d-accb-acfb03bc22c9 73611 0 2021-10-30 01:01:20 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.48" ], "mac": "02:ac:79:c5:8a:e8", "default": true, 
"dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.48" ], "mac": "02:ac:79:c5:8a:e8", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rolling-update-deployment-585b757574 552bb049-2e7b-4be9-a8f2-5df2e7cf9534 0xc00559cfaf 0xc00559cfc0}] [] [{kube-controller-manager Update v1 2021-10-30 01:01:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"552bb049-2e7b-4be9-a8f2-5df2e7cf9534\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:01:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:01:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.48\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9ztxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9ztxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil
,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:01:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:01:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:01:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:01:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.48,StartTime:2021-10-30 01:01:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:01:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://bd010adeb01791cb609f65f6df31f0160c40c5a476842daf9be53594bd85569d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.48,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:26.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8623" for this suite. 
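------------------------------
The strategy dumped in the Deployment object above (RollingUpdate with the default 25% maxSurge / 25% maxUnavailable) can be expressed with k8s.io/api types as below. The replica count, labels, and agnhost image mirror the log; everything else is illustrative.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(1)
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")
	labels := map[string]string{"name": "sample-pod"}

	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate replaces the adopted replica set's pods with new ones
			// while keeping surge/unavailability within the stated bounds.
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
					}},
				},
			},
		},
	}
	b, _ := json.Marshal(d.Spec.Strategy)
	fmt.Println(string(b))
}
------------------------------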
• [SLOW TEST:11.076 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":23,"skipped":487,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:22.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Oct 30 01:01:22.546: INFO: Waiting up to 5m0s for pod "downward-api-d85fc7ef-501c-4e41-9dc5-4beb010fe493" in namespace "downward-api-9388" to be "Succeeded or Failed" Oct 30 01:01:22.548: INFO: Pod "downward-api-d85fc7ef-501c-4e41-9dc5-4beb010fe493": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114057ms Oct 30 01:01:24.552: INFO: Pod "downward-api-d85fc7ef-501c-4e41-9dc5-4beb010fe493": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006196566s Oct 30 01:01:26.556: INFO: Pod "downward-api-d85fc7ef-501c-4e41-9dc5-4beb010fe493": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0101149s Oct 30 01:01:28.559: INFO: Pod "downward-api-d85fc7ef-501c-4e41-9dc5-4beb010fe493": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01373241s STEP: Saw pod success Oct 30 01:01:28.559: INFO: Pod "downward-api-d85fc7ef-501c-4e41-9dc5-4beb010fe493" satisfied condition "Succeeded or Failed" Oct 30 01:01:28.561: INFO: Trying to get logs from node node1 pod downward-api-d85fc7ef-501c-4e41-9dc5-4beb010fe493 container dapi-container: STEP: delete the pod Oct 30 01:01:28.573: INFO: Waiting for pod downward-api-d85fc7ef-501c-4e41-9dc5-4beb010fe493 to disappear Oct 30 01:01:28.575: INFO: Pod downward-api-d85fc7ef-501c-4e41-9dc5-4beb010fe493 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:28.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9388" for this suite. 
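------------------------------
The dapi-container in this test receives the pod's own UID through the downward API as an environment variable. A minimal sketch using fieldRef (metadata.uid); the image and command are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "env | grep POD_UID"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					// Resolved by the kubelet from the pod's own metadata.
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
	b, _ := json.Marshal(pod.Spec.Containers[0].Env)
	fmt.Println(string(b))
}
------------------------------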
• [SLOW TEST:6.067 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":451,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:13.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 Oct 30 01:01:13.529: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the sample API server. Oct 30 01:01:13.971: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Oct 30 01:01:15.996: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152473, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152473, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152473, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152473, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:01:18.002: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152473, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152473, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152473, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152473, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:01:20.001: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152473, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152473, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152473, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152473, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:01:22.000: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152473, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152473, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152473, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152473, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:01:24.000: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152473, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152473, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152473, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152473, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:01:28.214: INFO: Waited 2.208145637s for the sample-apiserver to be ready to handle requests. 
STEP: Read Status for v1alpha1.wardle.example.com STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' STEP: List APIServices Oct 30 01:01:28.616: INFO: Found v1alpha1.wardle.example.com in APIServiceList [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:29.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7288" for this suite. • [SLOW TEST:16.001 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":19,"skipped":375,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:28.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:01:28.626: INFO: The status of Pod busybox-scheduling-4ce3333d-d258-4633-95d4-f30016f87247 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:01:30.630: INFO: The status of Pod busybox-scheduling-4ce3333d-d258-4633-95d4-f30016f87247 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:01:32.630: INFO: The status of Pod busybox-scheduling-4ce3333d-d258-4633-95d4-f30016f87247 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:01:34.632: INFO: The status of Pod busybox-scheduling-4ce3333d-d258-4633-95d4-f30016f87247 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:34.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1963" for this suite. 
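------------------------------
Once the busybox pod above is Running, the test verifies its output by fetching the container log through the API server. A sketch of that read with client-go; clientset construction is elided and the names are placeholders, so treat this as an assumed shape rather than the framework's own helper.

package main

import (
	"context"
	"fmt"
	"io"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// podLogs streams a pod's (single) container log via the API server
// and returns it as a string.
func podLogs(cs kubernetes.Interface, ns, pod string) (string, error) {
	req := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{})
	rc, err := req.Stream(context.TODO())
	if err != nil {
		return "", err
	}
	defer rc.Close()
	b, err := io.ReadAll(rc)
	return string(b), err
}

func main() {
	fmt.Println("podLogs(cs, \"kubelet-test-1963\", \"busybox-scheduling-...\") would return the echoed output")
}
------------------------------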
• [SLOW TEST:6.054 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox command in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:41 should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":456,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:56:35.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W1030 00:56:35.229584 26 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a suspended cronjob STEP: Ensuring no jobs are scheduled STEP: Ensuring no job exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:35.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-5942" for this suite. • [SLOW TEST:300.053 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":16,"skipped":392,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:34.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Oct 30 01:01:34.696: INFO: Waiting up to 5m0s for pod "downward-api-32be1bf7-80bc-4523-a638-fa9ced6f5195" in namespace "downward-api-8712" to be "Succeeded or Failed" Oct 30 01:01:34.698: INFO: Pod "downward-api-32be1bf7-80bc-4523-a638-fa9ced6f5195": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1.936241ms Oct 30 01:01:36.703: INFO: Pod "downward-api-32be1bf7-80bc-4523-a638-fa9ced6f5195": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006722072s Oct 30 01:01:38.708: INFO: Pod "downward-api-32be1bf7-80bc-4523-a638-fa9ced6f5195": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011364901s STEP: Saw pod success Oct 30 01:01:38.708: INFO: Pod "downward-api-32be1bf7-80bc-4523-a638-fa9ced6f5195" satisfied condition "Succeeded or Failed" Oct 30 01:01:38.710: INFO: Trying to get logs from node node2 pod downward-api-32be1bf7-80bc-4523-a638-fa9ced6f5195 container dapi-container: STEP: delete the pod Oct 30 01:01:38.724: INFO: Waiting for pod downward-api-32be1bf7-80bc-4523-a638-fa9ced6f5195 to disappear Oct 30 01:01:38.727: INFO: Pod downward-api-32be1bf7-80bc-4523-a638-fa9ced6f5195 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:38.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8712" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":464,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:35.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Oct 30 01:01:35.306: INFO: Waiting up to 5m0s for pod "pod-3442ccde-6ec3-4dad-beac-da9ecd276f5a" in namespace "emptydir-4312" to be "Succeeded or Failed" Oct 30 01:01:35.308: INFO: Pod "pod-3442ccde-6ec3-4dad-beac-da9ecd276f5a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.960676ms Oct 30 01:01:37.312: INFO: Pod "pod-3442ccde-6ec3-4dad-beac-da9ecd276f5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005662934s Oct 30 01:01:39.315: INFO: Pod "pod-3442ccde-6ec3-4dad-beac-da9ecd276f5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009225731s STEP: Saw pod success Oct 30 01:01:39.315: INFO: Pod "pod-3442ccde-6ec3-4dad-beac-da9ecd276f5a" satisfied condition "Succeeded or Failed" Oct 30 01:01:39.318: INFO: Trying to get logs from node node1 pod pod-3442ccde-6ec3-4dad-beac-da9ecd276f5a container test-container: STEP: delete the pod Oct 30 01:01:39.341: INFO: Waiting for pod pod-3442ccde-6ec3-4dad-beac-da9ecd276f5a to disappear Oct 30 01:01:39.343: INFO: Pod pod-3442ccde-6ec3-4dad-beac-da9ecd276f5a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:39.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4312" for this suite. 
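------------------------------
The volume under test here is an emptyDir backed by tmpfs (medium: Memory) holding a file created with mode 0644. Sketch below; the image and shell command are assumptions standing in for the suite's mounttest fixture.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs instead of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29",
				Command: []string{"sh", "-c",
					"echo data > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	b, _ := json.Marshal(pod.Spec.Volumes)
	fmt.Println(string(b))
}
------------------------------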
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":398,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:26.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. Oct 30 01:01:26.218: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:01:28.221: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:01:30.221: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Oct 30 01:01:30.238: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:01:32.241: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:01:34.241: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Oct 30 01:01:34.247: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 30 01:01:34.250: INFO: Pod pod-with-prestop-http-hook still exists Oct 30 01:01:36.251: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 30 01:01:36.254: INFO: Pod pod-with-prestop-http-hook still exists Oct 30 01:01:38.252: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 30 01:01:38.256: INFO: Pod pod-with-prestop-http-hook still exists Oct 30 01:01:40.252: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 30 01:01:40.255: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:40.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9774" for this suite. 
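------------------------------
pod-with-prestop-http-hook carries a lifecycle preStop HTTPGet that fires against the handler pod as the hooked pod is deleted, which is what the final "check prestop hook" step verifies. Sketch below; the target IP, port, and path are placeholders (in the v1.21-era k8s.io/api the handler type is corev1.Handler, renamed LifecycleHandler in v0.23+).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-http-hook",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Lifecycle: &corev1.Lifecycle{
					// Executed by the kubelet before the container is stopped.
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop",
							Port: intstr.FromInt(8080),
							Host: "10.244.4.1", // placeholder: the handler pod's IP
						},
					},
				},
			}},
		},
	}
	b, _ := json.Marshal(pod.Spec.Containers[0].Lifecycle)
	fmt.Println(string(b))
}
------------------------------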
• [SLOW TEST:14.097 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":490,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:20.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:01:26.120: INFO: Deleting pod "var-expansion-13236781-d493-4dd6-8f9d-73f22b819a50" in namespace "var-expansion-6848" Oct 30 01:01:26.123: INFO: Wait up to 5m0s for pod "var-expansion-13236781-d493-4dd6-8f9d-73f22b819a50" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:42.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6848" for this suite. 
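------------------------------
The failure case here is a subPathExpr whose expanded value contains backticks: expansion substitutes only $(VAR) references, and the kubelet must reject the resulting subpath rather than shell-evaluate it, hence the wait above for the pod to be fully deleted. The exact fixture lives in test/e2e/common/node/expansion.go; this sketch is an assumed shape.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-backticks"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox:1.29",
				// The expanded subpath would contain backticks, which is invalid;
				// the pod must fail instead of evaluating them.
				Env: []corev1.EnvVar{{Name: "POD_NAME", Value: "..`hostname`.."}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:        "workdir",
					MountPath:   "/volume_mount",
					SubPathExpr: "$(POD_NAME)",
				}},
			}},
		},
	}
	b, _ := json.Marshal(pod.Spec.Containers[0].VolumeMounts)
	fmt.Println(string(b))
}
------------------------------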
• [SLOW TEST:22.058 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":15,"skipped":281,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:39.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-d5d162bb-a10a-4de7-9bed-d5f31c295ed4 STEP: Creating a pod to test consume secrets Oct 30 01:01:39.507: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ec036cf3-b27d-4e08-9b3f-407d776f2be3" in namespace "projected-4986" to be "Succeeded or Failed" Oct 30 01:01:39.511: INFO: Pod "pod-projected-secrets-ec036cf3-b27d-4e08-9b3f-407d776f2be3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.357435ms Oct 30 01:01:41.514: INFO: Pod "pod-projected-secrets-ec036cf3-b27d-4e08-9b3f-407d776f2be3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006909772s Oct 30 01:01:43.518: INFO: Pod "pod-projected-secrets-ec036cf3-b27d-4e08-9b3f-407d776f2be3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010315693s Oct 30 01:01:45.527: INFO: Pod "pod-projected-secrets-ec036cf3-b27d-4e08-9b3f-407d776f2be3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019278273s STEP: Saw pod success Oct 30 01:01:45.527: INFO: Pod "pod-projected-secrets-ec036cf3-b27d-4e08-9b3f-407d776f2be3" satisfied condition "Succeeded or Failed" Oct 30 01:01:45.529: INFO: Trying to get logs from node node2 pod pod-projected-secrets-ec036cf3-b27d-4e08-9b3f-407d776f2be3 container projected-secret-volume-test: STEP: delete the pod Oct 30 01:01:45.546: INFO: Waiting for pod pod-projected-secrets-ec036cf3-b27d-4e08-9b3f-407d776f2be3 to disappear Oct 30 01:01:45.548: INFO: Pod pod-projected-secrets-ec036cf3-b27d-4e08-9b3f-407d776f2be3 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:45.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4986" for this suite. 
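The projected-secret test above checks that defaultMode controls the file permissions of the projected keys. A minimal sketch of the shape being tested (secret name, mode, and paths are illustrative, and the secret is assumed to already exist):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo        # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox:1.28
        command: ["sh", "-c", "stat -c '%a %n' /etc/creds/*"]
        volumeMounts:
        - name: creds
          mountPath: /etc/creds
      volumes:
      - name: creds
        projected:
          defaultMode: 0400              # each key lands as r--------
          sources:
          - secret:
              name: my-secret            # assumed to exist
    EOF
    kubectl logs projected-secret-demo   # should print mode 400 per key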
• [SLOW TEST:6.086 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":463,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:40.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 01:01:40.638: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 01:01:42.646: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152500, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152500, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152500, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152500, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 01:01:45.659: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Oct 30 01:01:45.672: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:45.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3700" for this suite. STEP: Destroying namespace "webhook-3700-markers" for this suite. 
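The admission test above registers a validating webhook that rejects CRD creation. A sketch of what such a registration object looks like; the service name echoes the log, but the namespace, path, and CA wiring are placeholders that the e2e suite normally sets up itself:

    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: deny-crd-creation            # illustrative name
    webhooks:
    - name: deny-crd.example.com
      admissionReviewVersions: ["v1"]
      sideEffects: None
      failurePolicy: Fail
      rules:
      - apiGroups: ["apiextensions.k8s.io"]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["customresourcedefinitions"]
      clientConfig:
        service:
          namespace: default             # placeholder
          name: e2e-test-webhook         # service name seen in the log
          path: /crd                     # hypothetical handler path
        # caBundle: <base64 PEM bundle of the serving CA>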
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.429 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":25,"skipped":492,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:42.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3129.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3129.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 30 01:01:48.268: INFO: DNS probes using dns-3129/dns-test-9737802f-fc91-4442-9ed1-d04617c5e705 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:48.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3129" for this suite. • [SLOW TEST:6.089 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":16,"skipped":312,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:48.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token Oct 30 01:01:48.880: INFO: created pod pod-service-account-defaultsa Oct 30 01:01:48.880: INFO: pod pod-service-account-defaultsa service account token volume mount: true Oct 30 01:01:48.889: INFO: created pod pod-service-account-mountsa Oct 30 01:01:48.889: INFO: pod pod-service-account-mountsa service account token volume mount: true Oct 30 01:01:48.898: INFO: created pod pod-service-account-nomountsa Oct 30 01:01:48.898: INFO: pod pod-service-account-nomountsa service account token volume mount: false Oct 30 01:01:48.907: INFO: created pod pod-service-account-defaultsa-mountspec Oct 30 01:01:48.907: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Oct 30 01:01:48.916: INFO: created pod pod-service-account-mountsa-mountspec Oct 30 01:01:48.916: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Oct 30 01:01:48.926: INFO: created pod pod-service-account-nomountsa-mountspec Oct 30 01:01:48.926: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Oct 30 01:01:48.934: INFO: created pod pod-service-account-defaultsa-nomountspec Oct 30 01:01:48.934: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Oct 30 01:01:48.943: INFO: created pod pod-service-account-mountsa-nomountspec Oct 30 
01:01:48.943: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Oct 30 01:01:48.951: INFO: created pod pod-service-account-nomountsa-nomountspec Oct 30 01:01:48.951: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:48.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8281" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":17,"skipped":342,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:45.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 01:01:45.606: INFO: Waiting up to 5m0s for pod "downwardapi-volume-72b6233e-28a0-4de5-8c3b-a8ede3efe01d" in namespace "downward-api-6949" to be "Succeeded or Failed" Oct 30 01:01:45.609: INFO: Pod "downwardapi-volume-72b6233e-28a0-4de5-8c3b-a8ede3efe01d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.494981ms Oct 30 01:01:47.613: INFO: Pod "downwardapi-volume-72b6233e-28a0-4de5-8c3b-a8ede3efe01d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006140533s Oct 30 01:01:49.618: INFO: Pod "downwardapi-volume-72b6233e-28a0-4de5-8c3b-a8ede3efe01d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011014781s STEP: Saw pod success Oct 30 01:01:49.618: INFO: Pod "downwardapi-volume-72b6233e-28a0-4de5-8c3b-a8ede3efe01d" satisfied condition "Succeeded or Failed" Oct 30 01:01:49.620: INFO: Trying to get logs from node node2 pod downwardapi-volume-72b6233e-28a0-4de5-8c3b-a8ede3efe01d container client-container: STEP: delete the pod Oct 30 01:01:49.633: INFO: Waiting for pod downwardapi-volume-72b6233e-28a0-4de5-8c3b-a8ede3efe01d to disappear Oct 30 01:01:49.635: INFO: Pod downwardapi-volume-72b6233e-28a0-4de5-8c3b-a8ede3efe01d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:01:49.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6949" for this suite. 
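The Downward API test above surfaces the container's own memory limit as a file inside the pod. A minimal sketch of the volume shape (pod name, limit, path, and divisor are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-limit-demo          # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.28
        command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
        resources:
          limits:
            memory: 64Mi
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi               # so the file reads "64"
    EOF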
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":472,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:01.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W1030 01:00:01.944937 31 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: Ensuring more than one job is running at a time STEP: Ensuring at least two running jobs exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:02:01.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-7077" for this suite. • [SLOW TEST:120.044 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":20,"skipped":424,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:48.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Oct 30 01:01:49.290: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 01:01:49.301: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 01:01:51.312: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152509, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152509, loc:(*time.Location)(0x9e12f00)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152509, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152509, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 01:01:54.324: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:01:54.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:02:02.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9107" for this suite. STEP: Destroying namespace "webhook-9107-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.438 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":18,"skipped":358,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:02:02.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-b31f8d9d-cec0-4a97-8987-4e6ca05a60ce STEP: Creating a pod to test consume configMaps Oct 30 01:02:02.512: INFO: Waiting up to 5m0s for pod "pod-configmaps-51a76c05-6939-4829-8c78-ae2d08d7ba3b" in namespace "configmap-4949" to be "Succeeded or Failed" Oct 30 01:02:02.514: INFO: Pod "pod-configmaps-51a76c05-6939-4829-8c78-ae2d08d7ba3b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.972458ms Oct 30 01:02:04.519: INFO: Pod "pod-configmaps-51a76c05-6939-4829-8c78-ae2d08d7ba3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007080633s Oct 30 01:02:06.523: INFO: Pod "pod-configmaps-51a76c05-6939-4829-8c78-ae2d08d7ba3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010810268s STEP: Saw pod success Oct 30 01:02:06.523: INFO: Pod "pod-configmaps-51a76c05-6939-4829-8c78-ae2d08d7ba3b" satisfied condition "Succeeded or Failed" Oct 30 01:02:06.525: INFO: Trying to get logs from node node2 pod pod-configmaps-51a76c05-6939-4829-8c78-ae2d08d7ba3b container agnhost-container: STEP: delete the pod Oct 30 01:02:06.536: INFO: Waiting for pod pod-configmaps-51a76c05-6939-4829-8c78-ae2d08d7ba3b to disappear Oct 30 01:02:06.538: INFO: Pod pod-configmaps-51a76c05-6939-4829-8c78-ae2d08d7ba3b no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:02:06.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4949" for this suite. 
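Same defaultMode mechanics as the projected-secret case above, but on a plain configMap volume. One quick way to verify the resulting file mode, sketched with illustrative names and an assumed pre-existing configMap:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-mode-demo          # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: agnhost-container
        image: busybox:1.28
        command: ["sh", "-c", "stat -c '%a %n' /etc/config/*"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/config
      volumes:
      - name: cfg
        configMap:
          name: my-config                # assumed to exist
          defaultMode: 0644
    EOF
    kubectl logs configmap-mode-demo     # expect "644 /etc/config/<key>"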
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":387,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:45.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 01:01:46.409: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 01:01:48.421: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152506, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152506, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152506, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152506, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:01:50.425: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152506, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152506, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152506, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152506, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:01:52.425: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152506, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152506, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152506, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771152506, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 01:01:55.433: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:02:06.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4973" for this suite. STEP: Destroying namespace "webhook-4973-markers" for this suite. 
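The bypass step at the end of this webhook test relies on namespaceSelector: namespaces carrying an exempting label are never sent to the hook, which is why the "whitelisted namespace" configmap is admitted. A sketch of that selector (label key and other names are illustrative):

    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: deny-pods-and-configmaps      # illustrative name
    webhooks:
    - name: deny-core.example.com
      admissionReviewVersions: ["v1"]
      sideEffects: None
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods", "configmaps"]
      namespaceSelector:                  # the bypass mechanism
        matchExpressions:
        - key: skip-webhook               # hypothetical exemption label
          operator: DoesNotExist
      clientConfig:
        service:
          namespace: default              # placeholder
          name: sample-webhook-service    # placeholder
          path: /always-deny              # hypothetical handler path
        # caBundle: <base64 PEM bundle of the serving CA>

    # Any namespace labeled with the key is then skipped entirely:
    #   kubectl label namespace exempt-ns skip-webhook=yes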
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.832 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":26,"skipped":494,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:38.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-t49h STEP: Creating a pod to test atomic-volume-subpath Oct 30 01:01:38.785: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-t49h" in namespace "subpath-8058" to be "Succeeded or Failed" Oct 30 01:01:38.788: INFO: Pod "pod-subpath-test-configmap-t49h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.261692ms Oct 30 01:01:40.791: INFO: Pod "pod-subpath-test-configmap-t49h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005387521s Oct 30 01:01:42.796: INFO: Pod "pod-subpath-test-configmap-t49h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010586494s Oct 30 01:01:44.799: INFO: Pod "pod-subpath-test-configmap-t49h": Phase="Running", Reason="", readiness=true. Elapsed: 6.013910099s Oct 30 01:01:46.802: INFO: Pod "pod-subpath-test-configmap-t49h": Phase="Running", Reason="", readiness=true. Elapsed: 8.017132189s Oct 30 01:01:48.805: INFO: Pod "pod-subpath-test-configmap-t49h": Phase="Running", Reason="", readiness=true. Elapsed: 10.019998227s Oct 30 01:01:50.809: INFO: Pod "pod-subpath-test-configmap-t49h": Phase="Running", Reason="", readiness=true. Elapsed: 12.02363858s Oct 30 01:01:52.813: INFO: Pod "pod-subpath-test-configmap-t49h": Phase="Running", Reason="", readiness=true. Elapsed: 14.027372699s Oct 30 01:01:54.817: INFO: Pod "pod-subpath-test-configmap-t49h": Phase="Running", Reason="", readiness=true. Elapsed: 16.031881057s Oct 30 01:01:56.821: INFO: Pod "pod-subpath-test-configmap-t49h": Phase="Running", Reason="", readiness=true. Elapsed: 18.035705216s Oct 30 01:01:58.825: INFO: Pod "pod-subpath-test-configmap-t49h": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.039345992s Oct 30 01:02:00.828: INFO: Pod "pod-subpath-test-configmap-t49h": Phase="Running", Reason="", readiness=true. Elapsed: 22.042960558s Oct 30 01:02:02.831: INFO: Pod "pod-subpath-test-configmap-t49h": Phase="Running", Reason="", readiness=true. Elapsed: 24.045886212s Oct 30 01:02:04.834: INFO: Pod "pod-subpath-test-configmap-t49h": Phase="Running", Reason="", readiness=true. Elapsed: 26.049081748s Oct 30 01:02:06.837: INFO: Pod "pod-subpath-test-configmap-t49h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.051708001s STEP: Saw pod success Oct 30 01:02:06.837: INFO: Pod "pod-subpath-test-configmap-t49h" satisfied condition "Succeeded or Failed" Oct 30 01:02:06.840: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-t49h container test-container-subpath-configmap-t49h: STEP: delete the pod Oct 30 01:02:06.855: INFO: Waiting for pod pod-subpath-test-configmap-t49h to disappear Oct 30 01:02:06.857: INFO: Pod pod-subpath-test-configmap-t49h no longer exists STEP: Deleting pod pod-subpath-test-configmap-t49h Oct 30 01:02:06.857: INFO: Deleting pod "pod-subpath-test-configmap-t49h" in namespace "subpath-8058" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:02:06.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8058" for this suite. • [SLOW TEST:28.121 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:02:06.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name projected-secret-test-db36cf7b-b23d-4d20-a1f7-13c4e5b99763 STEP: Creating a pod to test consume secrets Oct 30 01:02:06.627: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e8cbcfed-6a1d-42bd-8fb1-6ba38b93d364" in namespace "projected-7246" to be "Succeeded or Failed" Oct 30 01:02:06.629: INFO: Pod "pod-projected-secrets-e8cbcfed-6a1d-42bd-8fb1-6ba38b93d364": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232775ms Oct 30 01:02:08.632: INFO: Pod "pod-projected-secrets-e8cbcfed-6a1d-42bd-8fb1-6ba38b93d364": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0054922s Oct 30 01:02:10.635: INFO: Pod "pod-projected-secrets-e8cbcfed-6a1d-42bd-8fb1-6ba38b93d364": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008197784s STEP: Saw pod success Oct 30 01:02:10.635: INFO: Pod "pod-projected-secrets-e8cbcfed-6a1d-42bd-8fb1-6ba38b93d364" satisfied condition "Succeeded or Failed" Oct 30 01:02:10.637: INFO: Trying to get logs from node node2 pod pod-projected-secrets-e8cbcfed-6a1d-42bd-8fb1-6ba38b93d364 container secret-volume-test: STEP: delete the pod Oct 30 01:02:10.665: INFO: Waiting for pod pod-projected-secrets-e8cbcfed-6a1d-42bd-8fb1-6ba38b93d364 to disappear Oct 30 01:02:10.667: INFO: Pod pod-projected-secrets-e8cbcfed-6a1d-42bd-8fb1-6ba38b93d364 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:02:10.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7246" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":511,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:49.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7418 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7418 STEP: creating replication controller externalsvc in namespace services-7418 I1030 01:01:49.703794 26 runners.go:190] Created replication controller with name: externalsvc, namespace: services-7418, replica count: 2 I1030 01:01:52.755068 26 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:01:55.755789 26 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:01:58.756671 26 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Oct 30 01:01:58.771: INFO: Creating new exec pod Oct 30 01:02:02.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7418 exec execpodcqqgk -- /bin/sh -x -c nslookup nodeport-service.services-7418.svc.cluster.local' Oct 30 01:02:03.034: INFO: stderr: "+ nslookup nodeport-service.services-7418.svc.cluster.local\n" Oct 30 01:02:03.034: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nnodeport-service.services-7418.svc.cluster.local\tcanonical name = 
externalsvc.services-7418.svc.cluster.local.\nName:\texternalsvc.services-7418.svc.cluster.local\nAddress: 10.233.8.138\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7418, will wait for the garbage collector to delete the pods Oct 30 01:02:03.093: INFO: Deleting ReplicationController externalsvc took: 4.596639ms Oct 30 01:02:03.193: INFO: Terminating ReplicationController externalsvc pods took: 100.858374ms Oct 30 01:02:13.104: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:02:13.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7418" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:23.452 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":20,"skipped":482,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:02:13.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Oct 30 01:02:13.165: INFO: The status of Pod labelsupdate31b6eca5-0e78-469f-b6e7-7f3d8494a226 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:02:15.169: INFO: The status of Pod labelsupdate31b6eca5-0e78-469f-b6e7-7f3d8494a226 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:02:17.168: INFO: The status of Pod labelsupdate31b6eca5-0e78-469f-b6e7-7f3d8494a226 is Running (Ready = true) Oct 30 01:02:17.689: INFO: Successfully updated pod "labelsupdate31b6eca5-0e78-469f-b6e7-7f3d8494a226" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:02:19.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7965" for this suite. 
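The projected downwardAPI test above depends on the kubelet rewriting the labels file after a live label change. A sketch of the volume plus the label update that drives it (names and label values are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: labels-demo                  # illustrative name
      labels:
        stage: before
    spec:
      containers:
      - name: main
        image: busybox:1.28
        command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
    EOF
    kubectl label pod labels-demo stage=after --overwrite
    kubectl logs labels-demo --tail=2    # file eventually shows stage="after"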
• [SLOW TEST:6.584 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":489,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":39,"skipped":468,"failed":0} [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:02:06.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating server pod server in namespace prestop-9900 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-9900 STEP: Deleting pre-stop pod Oct 30 01:02:19.935: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:02:19.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-9900" for this suite. 
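For the PreStop-on-delete flow above, the essential ordering is: delete request, then the preStop hook runs, then SIGTERM, then SIGKILL once the grace period expires. A sketch with an exec hook instead of the test's HTTP reporter (commands and names are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: prestop-kill-demo            # illustrative name
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: main
        image: busybox:1.28
        command: ["sh", "-c", "sleep 3600"]
        lifecycle:
          preStop:
            exec:
              command: ["sh", "-c", "echo prestop >> /tmp/hook.log; sleep 2"]
    EOF
    # The hook completes before SIGTERM is delivered, within the 30s grace period:
    kubectl delete pod prestop-kill-demo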
• [SLOW TEST:13.082 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":40,"skipped":468,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:02:02.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Oct 30 01:02:02.052: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:02:26.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1905" for this suite. • [SLOW TEST:24.001 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":21,"skipped":462,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:02:26.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: 
Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:02:30.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-7021" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":22,"skipped":487,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:02:30.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1836 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1836;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1836 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1836;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1836.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1836.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1836.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1836.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1836.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1836.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1836.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1836.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1836.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1836.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1836.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1836.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1836.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 149.50.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.50.149_udp@PTR;check="$$(dig +tcp +noall +answer +search 149.50.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.50.149_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1836 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1836;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1836 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1836;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1836.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1836.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1836.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1836.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1836.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1836.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1836.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1836.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1836.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1836.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1836.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1836.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1836.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 149.50.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.50.149_udp@PTR;check="$$(dig +tcp +noall +answer +search 149.50.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.50.149_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 30 01:02:36.265: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1836/dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b: the server could not find the requested resource (get pods dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b) Oct 30 01:02:36.269: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1836/dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b: the server could not find the requested resource (get pods dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b) Oct 30 01:02:36.272: INFO: Unable to read wheezy_udp@dns-test-service.dns-1836 from pod dns-1836/dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b: the server could not find the requested resource (get pods dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b) Oct 30 01:02:36.274: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1836 from pod dns-1836/dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b: the server could not find the requested resource (get pods dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b) Oct 30 01:02:36.277: INFO: Unable to read wheezy_udp@dns-test-service.dns-1836.svc from pod dns-1836/dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b: the server could not find the requested resource (get pods dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b) Oct 30 01:02:36.280: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1836.svc from pod dns-1836/dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b: the server could not find the requested resource (get pods dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b) Oct 30 01:02:36.282: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1836.svc from pod dns-1836/dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b: the server could not find the requested resource (get pods dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b) Oct 30 01:02:36.285: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1836.svc from pod dns-1836/dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b: the server could not find the requested resource (get pods dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b) Oct 30 01:02:36.306: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1836/dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b: the server could not find the requested resource (get pods dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b) Oct 30 01:02:36.308: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1836/dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b: the server could not find the requested resource (get pods dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b) Oct 30 01:02:36.311: INFO: Unable to read jessie_udp@dns-test-service.dns-1836 from pod dns-1836/dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b: the server could not find the requested resource (get pods dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b) Oct 30 01:02:36.314: INFO: Unable to read jessie_tcp@dns-test-service.dns-1836 from pod dns-1836/dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b: the server could not find the requested resource (get pods dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b) Oct 30 01:02:36.316: INFO: Unable to read jessie_udp@dns-test-service.dns-1836.svc from pod dns-1836/dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b: the server could not find the requested resource (get pods dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b) Oct 30 01:02:36.319: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-1836.svc from pod dns-1836/dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b: the server could not find the requested resource (get pods dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b) Oct 30 01:02:36.321: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1836.svc from pod dns-1836/dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b: the server could not find the requested resource (get pods dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b) Oct 30 01:02:36.324: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1836.svc from pod dns-1836/dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b: the server could not find the requested resource (get pods dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b) Oct 30 01:02:36.341: INFO: Lookups using dns-1836/dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1836 wheezy_tcp@dns-test-service.dns-1836 wheezy_udp@dns-test-service.dns-1836.svc wheezy_tcp@dns-test-service.dns-1836.svc wheezy_udp@_http._tcp.dns-test-service.dns-1836.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1836.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1836 jessie_tcp@dns-test-service.dns-1836 jessie_udp@dns-test-service.dns-1836.svc jessie_tcp@dns-test-service.dns-1836.svc jessie_udp@_http._tcp.dns-test-service.dns-1836.svc jessie_tcp@_http._tcp.dns-test-service.dns-1836.svc] Oct 30 01:02:41.419: INFO: DNS probes using dns-1836/dns-test-a0ef1faa-1ec2-4fe8-b29a-ea8845514f8b succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:02:41.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1836" for this suite. 
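Note: the doubled "$$" in the probe commands above appears to be an escaping artifact of how the test assembles the script before embedding it in the probe pod; in a plain shell each "$$" is a single "$". A minimal standalone version of the UDP/TCP probe loop, assuming a resolvable service named dns-test-service and a writable /results directory as in the probe pod:

    # Poll the service A record over UDP and TCP, recording success as the probe pod does.
    for i in $(seq 1 600); do
      check="$(dig +notcp +noall +answer +search dns-test-service A)" \
        && test -n "$check" && echo OK > /results/udp@dns-test-service
      check="$(dig +tcp +noall +answer +search dns-test-service A)" \
        && test -n "$check" && echo OK > /results/tcp@dns-test-service
      sleep 1
    done
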
• [SLOW TEST:11.240 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":23,"skipped":533,"failed":0} [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:02:41.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-0dee9e2b-215e-4f5b-bcf0-880d1cece6a7 STEP: Creating a pod to test consume configMaps Oct 30 01:02:41.492: INFO: Waiting up to 5m0s for pod "pod-configmaps-cb6d0cb2-71d0-4f3a-b11b-4e47a319a389" in namespace "configmap-3629" to be "Succeeded or Failed" Oct 30 01:02:41.495: INFO: Pod "pod-configmaps-cb6d0cb2-71d0-4f3a-b11b-4e47a319a389": Phase="Pending", Reason="", readiness=false. Elapsed: 3.533913ms Oct 30 01:02:43.500: INFO: Pod "pod-configmaps-cb6d0cb2-71d0-4f3a-b11b-4e47a319a389": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008326398s Oct 30 01:02:45.504: INFO: Pod "pod-configmaps-cb6d0cb2-71d0-4f3a-b11b-4e47a319a389": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012882412s Oct 30 01:02:47.509: INFO: Pod "pod-configmaps-cb6d0cb2-71d0-4f3a-b11b-4e47a319a389": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016976746s STEP: Saw pod success Oct 30 01:02:47.509: INFO: Pod "pod-configmaps-cb6d0cb2-71d0-4f3a-b11b-4e47a319a389" satisfied condition "Succeeded or Failed" Oct 30 01:02:47.511: INFO: Trying to get logs from node node2 pod pod-configmaps-cb6d0cb2-71d0-4f3a-b11b-4e47a319a389 container configmap-volume-test: STEP: delete the pod Oct 30 01:02:47.524: INFO: Waiting for pod pod-configmaps-cb6d0cb2-71d0-4f3a-b11b-4e47a319a389 to disappear Oct 30 01:02:47.526: INFO: Pod pod-configmaps-cb6d0cb2-71d0-4f3a-b11b-4e47a319a389 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:02:47.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3629" for this suite. 
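Note: the "multiple volumes in the same pod" case mounts one ConfigMap at two paths in a single pod. A minimal sketch of that shape; all object names here are hypothetical, not the generated names from this run:

    kubectl create configmap demo-cm --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-two-volumes
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"]
        volumeMounts:
        - name: cm-vol-1
          mountPath: /etc/cm-1
        - name: cm-vol-2
          mountPath: /etc/cm-2
      volumes:
      - name: cm-vol-1
        configMap:
          name: demo-cm
      - name: cm-vol-2
        configMap:
          name: demo-cm
    EOF
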
• [SLOW TEST:6.079 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":533,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:02:47.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events Oct 30 01:02:47.705: INFO: created test-event-1 Oct 30 01:02:47.708: INFO: created test-event-2 Oct 30 01:02:47.710: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Oct 30 01:02:47.712: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Oct 30 01:02:47.725: INFO: requesting list of events to confirm quantity [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:02:47.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7351" for this suite. 
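Note: the "requesting DeleteCollection of events" step above is the label-selector form of delete. A sketch, assuming the events were created with a common label such as testevent-set=true (the label name is illustrative, not taken from this log):

    kubectl -n events-7351 get events -l testevent-set=true
    kubectl -n events-7351 delete events -l testevent-set=true
    kubectl -n events-7351 get events -l testevent-set=true    # expect "No resources found"
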
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":25,"skipped":597,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:02:19.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:02:52.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1606" for this suite. 
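Note: the container names terminate-cmd-rpa/rpof/rpn plausibly stand for RestartPolicy Always/OnFailure/Never, the three policies this blackbox test cycles through for a container that exits. A sketch of one leg, with a hypothetical pod name, checking the same status fields the spec asserts on (Phase, RestartCount, Ready):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: terminate-demo
    spec:
      restartPolicy: OnFailure
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "exit 1"]
    EOF
    kubectl get pod terminate-demo \
      -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount} {.status.containerStatuses[0].ready}{"\n"}'
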
• [SLOW TEST:32.253 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":539,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:02:47.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslicemirroring STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 [It] should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: mirroring a new custom Endpoint Oct 30 01:02:47.788: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 STEP: mirroring an update to a custom Endpoint Oct 30 01:02:49.798: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 STEP: mirroring deletion of a custom Endpoint Oct 30 01:02:51.808: INFO: Waiting for 0 EndpointSlices to exist, got 1 [AfterEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:02:53.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslicemirroring-1281" for this suite. 
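Note: EndpointSlice mirroring applies to custom Endpoints objects backing a selector-less Service of the same name; the mirrored slice carries the kubernetes.io/service-name label. A sketch using the 10.1.2.3 address seen in the log before the update (the Service/Endpoints name is hypothetical):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: example-custom-endpoints
    spec:
      ports:
      - port: 80
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: example-custom-endpoints    # must match the Service name for mirroring
    subsets:
    - addresses:
      - ip: 10.1.2.3
      ports:
      - port: 80
    EOF
    kubectl get endpointslices -l kubernetes.io/service-name=example-custom-endpoints
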
• [SLOW TEST:6.062 seconds] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":26,"skipped":609,"failed":0} SSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:02:53.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:02:53.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5881" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":27,"skipped":612,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:02:19.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Oct 30 01:02:19.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 create -f -' Oct 30 01:02:20.361: INFO: stderr: "" Oct 30 01:02:20.361: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Oct 30 01:02:20.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 30 01:02:20.532: INFO: stderr: "" Oct 30 01:02:20.532: INFO: stdout: "update-demo-nautilus-lln9f update-demo-nautilus-rztnf " Oct 30 01:02:20.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods update-demo-nautilus-lln9f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 30 01:02:20.703: INFO: stderr: "" Oct 30 01:02:20.703: INFO: stdout: "" Oct 30 01:02:20.703: INFO: update-demo-nautilus-lln9f is created but not running Oct 30 01:02:25.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 30 01:02:25.864: INFO: stderr: "" Oct 30 01:02:25.864: INFO: stdout: "update-demo-nautilus-lln9f update-demo-nautilus-rztnf " Oct 30 01:02:25.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods update-demo-nautilus-lln9f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 30 01:02:26.008: INFO: stderr: "" Oct 30 01:02:26.008: INFO: stdout: "" Oct 30 01:02:26.008: INFO: update-demo-nautilus-lln9f is created but not running Oct 30 01:02:31.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 30 01:02:31.178: INFO: stderr: "" Oct 30 01:02:31.178: INFO: stdout: "update-demo-nautilus-lln9f update-demo-nautilus-rztnf " Oct 30 01:02:31.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods update-demo-nautilus-lln9f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 30 01:02:31.331: INFO: stderr: "" Oct 30 01:02:31.331: INFO: stdout: "true" Oct 30 01:02:31.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods update-demo-nautilus-lln9f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 30 01:02:31.481: INFO: stderr: "" Oct 30 01:02:31.481: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 30 01:02:31.481: INFO: validating pod update-demo-nautilus-lln9f Oct 30 01:02:31.486: INFO: got data: { "image": "nautilus.jpg" } Oct 30 01:02:31.487: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 30 01:02:31.487: INFO: update-demo-nautilus-lln9f is verified up and running Oct 30 01:02:31.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods update-demo-nautilus-rztnf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Oct 30 01:02:31.647: INFO: stderr: "" Oct 30 01:02:31.647: INFO: stdout: "true" Oct 30 01:02:31.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods update-demo-nautilus-rztnf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 30 01:02:31.807: INFO: stderr: "" Oct 30 01:02:31.807: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 30 01:02:31.807: INFO: validating pod update-demo-nautilus-rztnf Oct 30 01:02:31.811: INFO: got data: { "image": "nautilus.jpg" } Oct 30 01:02:31.811: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 30 01:02:31.811: INFO: update-demo-nautilus-rztnf is verified up and running STEP: scaling down the replication controller Oct 30 01:02:31.820: INFO: scanned /root for discovery docs: Oct 30 01:02:31.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Oct 30 01:02:32.033: INFO: stderr: "" Oct 30 01:02:32.033: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Oct 30 01:02:32.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 30 01:02:32.211: INFO: stderr: "" Oct 30 01:02:32.211: INFO: stdout: "update-demo-nautilus-lln9f update-demo-nautilus-rztnf " STEP: Replicas for name=update-demo: expected=1 actual=2 Oct 30 01:02:37.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 30 01:02:37.393: INFO: stderr: "" Oct 30 01:02:37.393: INFO: stdout: "update-demo-nautilus-lln9f update-demo-nautilus-rztnf " STEP: Replicas for name=update-demo: expected=1 actual=2 Oct 30 01:02:42.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 30 01:02:42.567: INFO: stderr: "" Oct 30 01:02:42.567: INFO: stdout: "update-demo-nautilus-lln9f update-demo-nautilus-rztnf " STEP: Replicas for name=update-demo: expected=1 actual=2 Oct 30 01:02:47.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 30 01:02:47.721: INFO: stderr: "" Oct 30 01:02:47.721: INFO: stdout: "update-demo-nautilus-lln9f " Oct 30 01:02:47.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods update-demo-nautilus-lln9f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 30 01:02:47.906: INFO: stderr: "" Oct 30 01:02:47.906: INFO: stdout: "true" Oct 30 01:02:47.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods update-demo-nautilus-lln9f -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 30 01:02:48.072: INFO: stderr: "" Oct 30 01:02:48.072: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 30 01:02:48.072: INFO: validating pod update-demo-nautilus-lln9f Oct 30 01:02:48.076: INFO: got data: { "image": "nautilus.jpg" } Oct 30 01:02:48.076: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 30 01:02:48.076: INFO: update-demo-nautilus-lln9f is verified up and running STEP: scaling up the replication controller Oct 30 01:02:48.085: INFO: scanned /root for discovery docs: Oct 30 01:02:48.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Oct 30 01:02:48.303: INFO: stderr: "" Oct 30 01:02:48.304: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Oct 30 01:02:48.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 30 01:02:48.470: INFO: stderr: "" Oct 30 01:02:48.470: INFO: stdout: "update-demo-nautilus-jzxwl update-demo-nautilus-lln9f " Oct 30 01:02:48.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods update-demo-nautilus-jzxwl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 30 01:02:48.637: INFO: stderr: "" Oct 30 01:02:48.637: INFO: stdout: "" Oct 30 01:02:48.637: INFO: update-demo-nautilus-jzxwl is created but not running Oct 30 01:02:53.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 30 01:02:53.822: INFO: stderr: "" Oct 30 01:02:53.822: INFO: stdout: "update-demo-nautilus-jzxwl update-demo-nautilus-lln9f " Oct 30 01:02:53.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods update-demo-nautilus-jzxwl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 30 01:02:53.979: INFO: stderr: "" Oct 30 01:02:53.979: INFO: stdout: "true" Oct 30 01:02:53.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods update-demo-nautilus-jzxwl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 30 01:02:54.136: INFO: stderr: "" Oct 30 01:02:54.136: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 30 01:02:54.136: INFO: validating pod update-demo-nautilus-jzxwl Oct 30 01:02:54.139: INFO: got data: { "image": "nautilus.jpg" } Oct 30 01:02:54.139: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 30 01:02:54.139: INFO: update-demo-nautilus-jzxwl is verified up and running Oct 30 01:02:54.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods update-demo-nautilus-lln9f -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 30 01:02:54.305: INFO: stderr: "" Oct 30 01:02:54.305: INFO: stdout: "true" Oct 30 01:02:54.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods update-demo-nautilus-lln9f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 30 01:02:54.469: INFO: stderr: "" Oct 30 01:02:54.469: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 30 01:02:54.469: INFO: validating pod update-demo-nautilus-lln9f Oct 30 01:02:54.472: INFO: got data: { "image": "nautilus.jpg" } Oct 30 01:02:54.472: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 30 01:02:54.472: INFO: update-demo-nautilus-lln9f is verified up and running STEP: using delete to clean up resources Oct 30 01:02:54.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 delete --grace-period=0 --force -f -' Oct 30 01:02:54.606: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 30 01:02:54.606: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Oct 30 01:02:54.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get rc,svc -l name=update-demo --no-headers' Oct 30 01:02:54.812: INFO: stderr: "No resources found in kubectl-3439 namespace.\n" Oct 30 01:02:54.812: INFO: stdout: "" Oct 30 01:02:54.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3439 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 30 01:02:54.985: INFO: stderr: "" Oct 30 01:02:54.985: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:02:54.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3439" for this suite. 
• [SLOW TEST:35.039 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":41,"skipped":469,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:02:52.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Oct 30 01:02:52.133: INFO: Waiting up to 5m0s for pod "pod-ed6153d1-f902-4fc2-a7f1-7cc09c735cfb" in namespace "emptydir-7127" to be "Succeeded or Failed" Oct 30 01:02:52.135: INFO: Pod "pod-ed6153d1-f902-4fc2-a7f1-7cc09c735cfb": Phase="Pending", Reason="", readiness=false. Elapsed: 1.942213ms Oct 30 01:02:54.139: INFO: Pod "pod-ed6153d1-f902-4fc2-a7f1-7cc09c735cfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005817841s Oct 30 01:02:56.144: INFO: Pod "pod-ed6153d1-f902-4fc2-a7f1-7cc09c735cfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010649317s STEP: Saw pod success Oct 30 01:02:56.144: INFO: Pod "pod-ed6153d1-f902-4fc2-a7f1-7cc09c735cfb" satisfied condition "Succeeded or Failed" Oct 30 01:02:56.146: INFO: Trying to get logs from node node1 pod pod-ed6153d1-f902-4fc2-a7f1-7cc09c735cfb container test-container: STEP: delete the pod Oct 30 01:02:56.158: INFO: Waiting for pod pod-ed6153d1-f902-4fc2-a7f1-7cc09c735cfb to disappear Oct 30 01:02:56.160: INFO: Pod pod-ed6153d1-f902-4fc2-a7f1-7cc09c735cfb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:02:56.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7127" for this suite. 
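Note: the "(non-root,0666,tmpfs)" spec name parameterizes the user, the file mode under test, and the emptyDir medium. A sketch of the tmpfs variant with hypothetical names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001            # the "non-root" leg of the test matrix
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "echo hi > /mnt/volume/f && chmod 0666 /mnt/volume/f && ls -l /mnt/volume/f"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt/volume
      volumes:
      - name: scratch
        emptyDir:
          medium: Memory           # tmpfs-backed emptyDir
    EOF
    kubectl logs emptydir-demo     # once Succeeded; expect -rw-rw-rw- on the file
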
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":562,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:02:56.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:03:00.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-6526" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":24,"skipped":578,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:03:00.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Oct 30 01:03:00.387: INFO: Waiting up to 5m0s for pod "downward-api-dfd91cf5-91bb-44c9-aa4b-e00f7c544b02" in namespace "downward-api-6632" to be "Succeeded or Failed" Oct 30 01:03:00.390: INFO: Pod "downward-api-dfd91cf5-91bb-44c9-aa4b-e00f7c544b02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.818601ms Oct 30 01:03:02.394: INFO: Pod "downward-api-dfd91cf5-91bb-44c9-aa4b-e00f7c544b02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007522689s Oct 30 01:03:04.399: INFO: Pod "downward-api-dfd91cf5-91bb-44c9-aa4b-e00f7c544b02": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011763887s STEP: Saw pod success Oct 30 01:03:04.399: INFO: Pod "downward-api-dfd91cf5-91bb-44c9-aa4b-e00f7c544b02" satisfied condition "Succeeded or Failed" Oct 30 01:03:04.401: INFO: Trying to get logs from node node2 pod downward-api-dfd91cf5-91bb-44c9-aa4b-e00f7c544b02 container dapi-container: STEP: delete the pod Oct 30 01:03:04.416: INFO: Waiting for pod downward-api-dfd91cf5-91bb-44c9-aa4b-e00f7c544b02 to disappear Oct 30 01:03:04.418: INFO: Pod downward-api-dfd91cf5-91bb-44c9-aa4b-e00f7c544b02 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:03:04.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6632" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":620,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:03:04.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:03:04.499: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Oct 30 01:03:04.513: INFO: The status of Pod pod-exec-websocket-b30516f4-e3fd-4f3f-a211-65678e1f4a0d is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:03:06.519: INFO: The status of Pod pod-exec-websocket-b30516f4-e3fd-4f3f-a211-65678e1f4a0d is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:03:08.518: INFO: The status of Pod pod-exec-websocket-b30516f4-e3fd-4f3f-a211-65678e1f4a0d is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:03:08.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3004" for this suite. 
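Note: the websocket spec above drives the pod's exec subresource over a raw websocket connection; from the CLI the equivalent round trip is kubectl exec, which negotiates the streaming transport with the API server on your behalf. A sketch with a hypothetical pod name:

    kubectl run exec-demo --image=busybox --restart=Never -- sleep 3600
    kubectl wait --for=condition=Ready pod/exec-demo
    kubectl exec exec-demo -- echo remote command execution works
    kubectl delete pod exec-demo
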
• ------------------------------ {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":639,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:03:08.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pods Oct 30 01:03:08.642: INFO: created test-pod-1 Oct 30 01:03:08.653: INFO: created test-pod-2 Oct 30 01:03:08.664: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:03:08.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4173" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":27,"skipped":640,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:03:08.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 01:03:08.822: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6730aa3e-90d7-40fe-8b30-89e77f914a0c" in namespace "downward-api-2416" to be "Succeeded or Failed" Oct 30 01:03:08.826: INFO: Pod "downwardapi-volume-6730aa3e-90d7-40fe-8b30-89e77f914a0c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.335933ms Oct 30 01:03:10.831: INFO: Pod "downwardapi-volume-6730aa3e-90d7-40fe-8b30-89e77f914a0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00817112s Oct 30 01:03:12.834: INFO: Pod "downwardapi-volume-6730aa3e-90d7-40fe-8b30-89e77f914a0c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012047s STEP: Saw pod success Oct 30 01:03:12.835: INFO: Pod "downwardapi-volume-6730aa3e-90d7-40fe-8b30-89e77f914a0c" satisfied condition "Succeeded or Failed" Oct 30 01:03:12.838: INFO: Trying to get logs from node node2 pod downwardapi-volume-6730aa3e-90d7-40fe-8b30-89e77f914a0c container client-container: STEP: delete the pod Oct 30 01:03:12.849: INFO: Waiting for pod downwardapi-volume-6730aa3e-90d7-40fe-8b30-89e77f914a0c to disappear Oct 30 01:03:12.851: INFO: Pod downwardapi-volume-6730aa3e-90d7-40fe-8b30-89e77f914a0c no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:03:12.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2416" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":684,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:02:06.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:02:06.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Oct 30 01:02:14.152: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-30T01:02:14Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-30T01:02:14Z]] name:name1 resourceVersion:74883 uid:7adba1bf-701d-4f02-912b-d989908a08e7] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Oct 30 01:02:24.156: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-30T01:02:24Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-30T01:02:24Z]] name:name2 resourceVersion:75051 uid:d97367a2-0d3d-456e-b3d4-6787962cc653] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Oct 30 01:02:34.161: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-30T01:02:14Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-30T01:02:34Z]] name:name1 resourceVersion:75210 uid:7adba1bf-701d-4f02-912b-d989908a08e7] 
num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Oct 30 01:02:44.167: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-30T01:02:24Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-30T01:02:44Z]] name:name2 resourceVersion:75322 uid:d97367a2-0d3d-456e-b3d4-6787962cc653] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Oct 30 01:02:54.172: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-30T01:02:14Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-30T01:02:34Z]] name:name1 resourceVersion:75506 uid:7adba1bf-701d-4f02-912b-d989908a08e7] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Oct 30 01:03:04.180: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-30T01:02:24Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-30T01:02:44Z]] name:name2 resourceVersion:75721 uid:d97367a2-0d3d-456e-b3d4-6787962cc653] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:03:14.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-5658" for this suite. 
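Note: with the CRD from this spec (group mygroup.example.com, version v1beta1, kind WishIHadChosenNoxu), the same ADDED/MODIFIED/DELETED stream is observable from the CLI. A sketch; the plural resource name "wishihadchosennoxus" is a guess at the CRD's plural, and the manifest fields are taken from the watch events in the log:

    # Stream watch events for the custom resource in the background...
    kubectl get wishihadchosennoxus --watch -o name &
    # ...then create and delete an instance to generate ADDED and DELETED events.
    kubectl apply -f - <<'EOF'
    apiVersion: mygroup.example.com/v1beta1
    kind: WishIHadChosenNoxu
    metadata:
      name: name1
    content:
      key: value
    EOF
    kubectl delete wishihadchosennoxus name1
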
• [SLOW TEST:68.112 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":20,"skipped":407,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:02:53.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-7z5j STEP: Creating a pod to test atomic-volume-subpath Oct 30 01:02:53.937: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7z5j" in namespace "subpath-6697" to be "Succeeded or Failed" Oct 30 01:02:53.939: INFO: Pod "pod-subpath-test-configmap-7z5j": Phase="Pending", Reason="", readiness=false. Elapsed: 1.902583ms Oct 30 01:02:55.943: INFO: Pod "pod-subpath-test-configmap-7z5j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005190806s Oct 30 01:02:57.947: INFO: Pod "pod-subpath-test-configmap-7z5j": Phase="Running", Reason="", readiness=true. Elapsed: 4.009614565s Oct 30 01:02:59.952: INFO: Pod "pod-subpath-test-configmap-7z5j": Phase="Running", Reason="", readiness=true. Elapsed: 6.014260518s Oct 30 01:03:01.955: INFO: Pod "pod-subpath-test-configmap-7z5j": Phase="Running", Reason="", readiness=true. Elapsed: 8.017500868s Oct 30 01:03:03.958: INFO: Pod "pod-subpath-test-configmap-7z5j": Phase="Running", Reason="", readiness=true. Elapsed: 10.020469913s Oct 30 01:03:05.963: INFO: Pod "pod-subpath-test-configmap-7z5j": Phase="Running", Reason="", readiness=true. Elapsed: 12.025015347s Oct 30 01:03:07.966: INFO: Pod "pod-subpath-test-configmap-7z5j": Phase="Running", Reason="", readiness=true. Elapsed: 14.028419916s Oct 30 01:03:09.970: INFO: Pod "pod-subpath-test-configmap-7z5j": Phase="Running", Reason="", readiness=true. Elapsed: 16.032695747s Oct 30 01:03:11.974: INFO: Pod "pod-subpath-test-configmap-7z5j": Phase="Running", Reason="", readiness=true. Elapsed: 18.036109707s Oct 30 01:03:13.977: INFO: Pod "pod-subpath-test-configmap-7z5j": Phase="Running", Reason="", readiness=true. Elapsed: 20.039153207s Oct 30 01:03:15.981: INFO: Pod "pod-subpath-test-configmap-7z5j": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.043260723s Oct 30 01:03:17.983: INFO: Pod "pod-subpath-test-configmap-7z5j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.045958696s STEP: Saw pod success Oct 30 01:03:17.984: INFO: Pod "pod-subpath-test-configmap-7z5j" satisfied condition "Succeeded or Failed" Oct 30 01:03:17.986: INFO: Trying to get logs from node node1 pod pod-subpath-test-configmap-7z5j container test-container-subpath-configmap-7z5j: STEP: delete the pod Oct 30 01:03:18.004: INFO: Waiting for pod pod-subpath-test-configmap-7z5j to disappear Oct 30 01:03:18.006: INFO: Pod pod-subpath-test-configmap-7z5j no longer exists STEP: Deleting pod pod-subpath-test-configmap-7z5j Oct 30 01:03:18.006: INFO: Deleting pod "pod-subpath-test-configmap-7z5j" in namespace "subpath-6697" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:03:18.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6697" for this suite. • [SLOW TEST:24.117 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:03:14.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Pod with a static label STEP: watching for Pod to be ready Oct 30 01:03:14.746: INFO: observed Pod pod-test in namespace pods-1243 in phase Pending with labels: map[test-pod-static:true] & conditions [] Oct 30 01:03:14.748: INFO: observed Pod pod-test in namespace pods-1243 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:03:14 +0000 UTC }] Oct 30 01:03:14.756: INFO: observed Pod pod-test in namespace pods-1243 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:03:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:03:14 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:03:14 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:03:14 +0000 UTC }] Oct 30 01:03:16.653: INFO: observed Pod pod-test in namespace pods-1243 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:03:14 +0000 UTC } 
{Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:03:14 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:03:14 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:03:14 +0000 UTC }] Oct 30 01:03:18.343: INFO: Found Pod pod-test in namespace pods-1243 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:03:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:03:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:03:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:03:14 +0000 UTC }] STEP: patching the Pod with a new Label and updated data Oct 30 01:03:18.355: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: getting the PodStatus STEP: replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted Oct 30 01:03:18.374: INFO: observed event type ADDED Oct 30 01:03:18.374: INFO: observed event type MODIFIED Oct 30 01:03:18.374: INFO: observed event type MODIFIED Oct 30 01:03:18.374: INFO: observed event type MODIFIED Oct 30 01:03:18.375: INFO: observed event type MODIFIED Oct 30 01:03:18.375: INFO: observed event type MODIFIED Oct 30 01:03:18.375: INFO: observed event type MODIFIED Oct 30 01:03:18.375: INFO: observed event type MODIFIED [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:03:18.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1243" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":21,"skipped":408,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:03:12.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Oct 30 01:03:12.894: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Oct 30 01:03:12.898: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Oct 30 01:03:12.899: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Oct 30 01:03:12.913: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Oct 30 01:03:12.913: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Oct 30 01:03:12.931: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Oct 30 01:03:12.931: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Oct 30 01:03:19.976: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:03:19.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-8094" for this suite. 
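Note: the defaulting verified above can be reproduced with a LimitRange carrying the same values (209715200 bytes = 200Mi requested memory, 214748364800 bytes = 200Gi requested ephemeral-storage). A sketch; the LimitRange and pod names are hypothetical:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: LimitRange
    metadata:
      name: limits-demo
    spec:
      limits:
      - type: Container
        defaultRequest:            # applied as .resources.requests when a pod omits them
          cpu: 100m
          memory: 200Mi
          ephemeral-storage: 200Gi
        default:                   # applied as .resources.limits
          cpu: 500m
          memory: 500Mi
          ephemeral-storage: 500Gi
    EOF
    # A pod created with no resource requirements picks up the defaults:
    kubectl run lr-demo --image=busybox --restart=Never -- sleep 60
    kubectl get pod lr-demo -o jsonpath='{.spec.containers[0].resources}{"\n"}'
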
• [SLOW TEST:7.125 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":29,"skipped":687,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":28,"skipped":635,"failed":0} [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:03:18.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-df879ac9-3b20-484a-af5c-06d31ae92cdc STEP: Creating a pod to test consume secrets Oct 30 01:03:18.051: INFO: Waiting up to 5m0s for pod "pod-secrets-1963c208-7601-45ce-8c4d-103a6fa5b3e3" in namespace "secrets-8953" to be "Succeeded or Failed" Oct 30 01:03:18.054: INFO: Pod "pod-secrets-1963c208-7601-45ce-8c4d-103a6fa5b3e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091305ms Oct 30 01:03:20.057: INFO: Pod "pod-secrets-1963c208-7601-45ce-8c4d-103a6fa5b3e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005955803s Oct 30 01:03:22.061: INFO: Pod "pod-secrets-1963c208-7601-45ce-8c4d-103a6fa5b3e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009289447s STEP: Saw pod success Oct 30 01:03:22.061: INFO: Pod "pod-secrets-1963c208-7601-45ce-8c4d-103a6fa5b3e3" satisfied condition "Succeeded or Failed" Oct 30 01:03:22.064: INFO: Trying to get logs from node node1 pod pod-secrets-1963c208-7601-45ce-8c4d-103a6fa5b3e3 container secret-volume-test: STEP: delete the pod Oct 30 01:03:22.076: INFO: Waiting for pod pod-secrets-1963c208-7601-45ce-8c4d-103a6fa5b3e3 to disappear Oct 30 01:03:22.079: INFO: Pod pod-secrets-1963c208-7601-45ce-8c4d-103a6fa5b3e3 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:03:22.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8953" for this suite. 
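Editor's sketch for the Secrets spec above: the log shows only the pod's phase transitions, not the pod under test. A minimal sketch of the shape such a pod takes, with hypothetical object names and 0400 as an illustrative mode (the fixture's actual defaultMode value is not printed in this log):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Secret
    metadata:
      name: example-secret           # the test uses a UUID-suffixed name
    stringData:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-example
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        # Entries in the mount are symlinks into ..data; the underlying
        # projected files carry the mode set by defaultMode.
        command: ["sh", "-c", "ls -l /etc/secret-volume"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
      volumes:
      - name: secret-volume
        secret:
          secretName: example-secret
          defaultMode: 0400          # illustrative value
    EOF
    # Once the pod reaches Succeeded, its log shows the mounted entries:
    kubectl logs pod-secrets-example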
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":635,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:03:22.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting the proxy server Oct 30 01:03:22.130: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8265 proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:03:22.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8265" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":30,"skipped":647,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:03:20.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:03:24.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-9386" for this suite. 
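Editor's sketch for the EndpointSlice spec above, whose body is elided in this log: what it asserts is that a Service with a selector gets matching Endpoints and EndpointSlice objects created for it, and that deleting the Service removes them. One way to observe the same behavior by hand, with illustrative names:

    # EndpointSlices created for a Service carry the
    # kubernetes.io/service-name label, which is how they can be listed.
    kubectl create deployment example-ep --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
    kubectl expose deployment example-ep --port=80
    kubectl get endpoints example-ep
    kubectl get endpointslices -l kubernetes.io/service-name=example-ep
    # Deleting the Service cleans up its Endpoints and EndpointSlices:
    kubectl delete service example-ep
    kubectl get endpointslices -l kubernetes.io/service-name=example-ep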
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":30,"skipped":720,"failed":0} SS ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":32,"skipped":458,"failed":0} [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 00:57:19.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-5797 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating stateful set ss in namespace statefulset-5797 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5797 Oct 30 00:57:19.637: INFO: Found 0 stateful pods, waiting for 1 Oct 30 00:57:29.641: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Oct 30 00:57:29.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 30 00:57:29.874: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 30 00:57:29.874: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 30 00:57:29.874: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 30 00:57:29.877: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Oct 30 00:57:39.884: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 30 00:57:39.884: INFO: Waiting for statefulset status.replicas updated to 0 Oct 30 00:57:39.895: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 00:57:39.895: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:19 +0000 UTC }] Oct 30 00:57:39.895: INFO: Oct 30 00:57:39.895: INFO: StatefulSet ss has not reached scale 3, at 1 Oct 30 00:57:40.898: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997097224s Oct 30 00:57:41.903: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.993217091s 
Oct 30 00:57:42.907: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.988787534s Oct 30 00:57:43.910: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.985214587s Oct 30 00:57:44.914: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.981220203s Oct 30 00:57:45.917: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.9782567s Oct 30 00:57:46.920: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.975184433s Oct 30 00:57:47.924: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.971420301s Oct 30 00:57:48.928: INFO: Verifying statefulset ss doesn't scale past 3 for another 967.972155ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5797 Oct 30 00:57:49.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 00:57:50.205: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Oct 30 00:57:50.205: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 30 00:57:50.205: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 30 00:57:50.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 00:57:50.447: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Oct 30 00:57:50.447: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 30 00:57:50.447: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 30 00:57:50.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 00:57:50.699: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Oct 30 00:57:50.699: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 30 00:57:50.699: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 30 00:57:50.702: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Oct 30 00:58:00.705: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Oct 30 00:58:00.705: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Oct 30 00:58:00.705: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Oct 30 00:58:00.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 30 00:58:01.022: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 30 00:58:01.023: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 30 00:58:01.023: INFO: stdout of mv 
-v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 30 00:58:01.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 30 00:58:01.321: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 30 00:58:01.321: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 30 00:58:01.321: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 30 00:58:01.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 30 00:58:02.087: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 30 00:58:02.087: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 30 00:58:02.087: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 30 00:58:02.087: INFO: Waiting for statefulset status.replicas updated to 0 Oct 30 00:58:02.090: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Oct 30 00:58:12.097: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 30 00:58:12.097: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Oct 30 00:58:12.097: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Oct 30 00:58:12.107: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 00:58:12.107: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:19 +0000 UTC }] Oct 30 00:58:12.107: INFO: ss-1 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC }] Oct 30 00:58:12.107: INFO: ss-2 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC }] Oct 30 00:58:12.107: INFO: Oct 30 00:58:12.107: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 30 00:58:13.111: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 00:58:13.111: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2021-10-30 00:57:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:19 +0000 UTC }] Oct 30 00:58:13.111: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC }] Oct 30 00:58:13.111: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC }] Oct 30 00:58:13.111: INFO: Oct 30 00:58:13.111: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 30 00:58:14.114: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 00:58:14.115: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:19 +0000 UTC }] Oct 30 00:58:14.115: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC }] Oct 30 00:58:14.115: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC }] Oct 30 00:58:14.115: INFO: Oct 30 00:58:14.115: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 30 00:58:15.119: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 00:58:15.119: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:19 +0000 UTC }] Oct 30 00:58:15.119: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC }] Oct 30 00:58:15.119: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC }] Oct 30 00:58:15.119: INFO: Oct 30 00:58:15.119: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 30 00:58:16.123: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 00:58:16.123: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:19 +0000 UTC }] Oct 30 00:58:16.123: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC }] Oct 30 00:58:16.123: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC }] Oct 30 00:58:16.123: INFO: Oct 30 00:58:16.123: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 30 00:58:17.128: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 00:58:17.128: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:19 +0000 UTC }] Oct 30 00:58:17.128: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 
00:57:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC }] Oct 30 00:58:17.128: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC }] Oct 30 00:58:17.128: INFO: Oct 30 00:58:17.128: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 30 00:58:18.131: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 00:58:18.131: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:19 +0000 UTC }] Oct 30 00:58:18.132: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC }] Oct 30 00:58:18.132: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC }] Oct 30 00:58:18.132: INFO: Oct 30 00:58:18.132: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 30 00:58:19.137: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 00:58:19.137: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC }] Oct 30 00:58:19.137: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:03 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC }] Oct 30 00:58:19.137: INFO: Oct 30 00:58:19.137: INFO: StatefulSet ss has not reached scale 0, at 2 Oct 30 00:58:20.140: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 00:58:20.140: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC }] Oct 30 00:58:20.140: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC }] Oct 30 00:58:20.140: INFO: Oct 30 00:58:20.140: INFO: StatefulSet ss has not reached scale 0, at 2 Oct 30 00:58:21.143: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 00:58:21.143: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC }] Oct 30 00:58:21.143: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:58:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 00:57:39 +0000 UTC }] Oct 30 00:58:21.143: INFO: Oct 30 00:58:21.143: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5797 Oct 30 00:58:22.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 00:58:22.427: INFO: rc: 1 Oct 30 00:58:22.427: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Oct 30 00:58:32.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 00:58:32.584: INFO: rc: 1 Oct 30 00:58:32.584: INFO: Waiting 10s to
retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 00:58:42.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 00:58:42.714: INFO: rc: 1 Oct 30 00:58:42.714: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 00:58:52.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 00:58:52.869: INFO: rc: 1 Oct 30 00:58:52.869: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 00:59:02.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 00:59:03.003: INFO: rc: 1 Oct 30 00:59:03.003: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 00:59:13.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 00:59:13.137: INFO: rc: 1 Oct 30 00:59:13.137: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 00:59:23.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 00:59:23.289: INFO: rc: 1 Oct 30 00:59:23.289: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 00:59:33.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 00:59:33.414: INFO: rc: 1 Oct 30 00:59:33.414: INFO: Waiting 10s to retry failed RunHostCmd: error running 
/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 00:59:43.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 00:59:43.562: INFO: rc: 1 Oct 30 00:59:43.562: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 00:59:53.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 00:59:53.722: INFO: rc: 1 Oct 30 00:59:53.722: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 01:00:03.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:00:03.861: INFO: rc: 1 Oct 30 01:00:03.861: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 01:00:13.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:00:13.996: INFO: rc: 1 Oct 30 01:00:13.996: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 01:00:23.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:00:24.134: INFO: rc: 1 Oct 30 01:00:24.134: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 01:00:34.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:00:34.277: INFO: rc: 1 Oct 30 01:00:34.277: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 01:00:44.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:00:44.425: INFO: rc: 1 Oct 30 01:00:44.425: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 01:00:54.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:00:54.554: INFO: rc: 1 Oct 30 01:00:54.554: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 01:01:04.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:01:04.713: INFO: rc: 1 Oct 30 01:01:04.713: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 01:01:14.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:01:14.871: INFO: rc: 1 Oct 30 01:01:14.871: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 01:01:24.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:01:25.034: INFO: rc: 1 Oct 30 01:01:25.034: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 01:01:35.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:01:35.179: INFO: rc: 1 Oct 30 01:01:35.179: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 01:01:45.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:01:45.328: INFO: rc: 1 Oct 30 01:01:45.328: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 01:01:55.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:01:55.476: INFO: rc: 1 Oct 30 01:01:55.476: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 01:02:05.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:02:05.628: INFO: rc: 1 Oct 30 01:02:05.628: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 01:02:15.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:02:15.788: INFO: rc: 1 Oct 30 01:02:15.788: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 01:02:25.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:02:25.920: INFO: rc: 1 Oct 30 01:02:25.920: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 01:02:35.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:02:36.075: INFO: rc: 1 Oct 30 01:02:36.075: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 01:02:46.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:02:46.225: INFO: rc: 1 Oct 30 01:02:46.225: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 01:02:56.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:02:56.385: INFO: rc: 1 Oct 30 01:02:56.385: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 01:03:06.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:03:06.538: INFO: rc: 1 Oct 30 01:03:06.539: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 01:03:16.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:03:16.692: INFO: rc: 1 Oct 30 01:03:16.692: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Oct 30 01:03:26.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5797 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:03:26.833: INFO: rc: 1 Oct 30 01:03:26.833: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: Oct 30 01:03:26.833: INFO: Scaling statefulset ss to 0 Oct 30 01:03:26.844: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Oct 30 01:03:26.846: INFO: Deleting all statefulset in ns statefulset-5797 Oct 30 01:03:26.849: INFO: Scaling statefulset ss to 0 Oct 30 01:03:26.857: INFO: Waiting for statefulset status.replicas updated to 0 Oct 30 01:03:26.860: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:03:26.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5797" for this suite. • [SLOW TEST:367.267 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":33,"skipped":458,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:03:24.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on node default medium Oct 30 01:03:24.159: INFO: Waiting up to 5m0s for pod "pod-87e3607c-b5ca-4b5b-a5f6-59ed275e8746" in namespace "emptydir-5807" to be "Succeeded or Failed" Oct 30 01:03:24.161: INFO: Pod "pod-87e3607c-b5ca-4b5b-a5f6-59ed275e8746": Phase="Pending", Reason="", readiness=false. Elapsed: 1.938215ms Oct 30 01:03:26.165: INFO: Pod "pod-87e3607c-b5ca-4b5b-a5f6-59ed275e8746": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006120998s Oct 30 01:03:28.169: INFO: Pod "pod-87e3607c-b5ca-4b5b-a5f6-59ed275e8746": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010212292s STEP: Saw pod success Oct 30 01:03:28.169: INFO: Pod "pod-87e3607c-b5ca-4b5b-a5f6-59ed275e8746" satisfied condition "Succeeded or Failed" Oct 30 01:03:28.172: INFO: Trying to get logs from node node2 pod pod-87e3607c-b5ca-4b5b-a5f6-59ed275e8746 container test-container: STEP: delete the pod Oct 30 01:03:28.188: INFO: Waiting for pod pod-87e3607c-b5ca-4b5b-a5f6-59ed275e8746 to disappear Oct 30 01:03:28.190: INFO: Pod pod-87e3607c-b5ca-4b5b-a5f6-59ed275e8746 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:03:28.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5807" for this suite. 
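Editor's sketch for the EmptyDir spec above: the pod under test mounts an emptyDir with no medium set and the suite checks the mount's mode against the expected value for that medium (the expected mode itself is not printed in this log). A minimal sketch of such a pod, with hypothetical names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-mode-example
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["sh", "-c", "ls -ld /test-volume"]  # prints the mount's mode bits
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}                 # no medium set, i.e. node-default storage
    EOF
    # After the pod completes, its log shows the directory's mode:
    kubectl logs emptydir-mode-example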
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":722,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:03:26.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Oct 30 01:03:26.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3245 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Oct 30 01:03:27.071: INFO: stderr: "" Oct 30 01:03:27.071: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Oct 30 01:03:27.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3245 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server' Oct 30 01:03:27.472: INFO: stderr: "" Oct 30 01:03:27.472: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Oct 30 01:03:27.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3245 delete pods e2e-test-httpd-pod' Oct 30 01:03:43.116: INFO: stderr: "" Oct 30 01:03:43.117: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:03:43.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3245" for this suite. 
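Editor's sketch for the dry-run spec above: it patches the running pod's image with --dry-run=server and then confirms the live object still carries the original image. The commands below reproduce that flow outside the suite, mirroring the kubectl invocations logged above; the jsonpath check is an illustrative equivalent of the verification step, not the test's literal code:

    kubectl run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 \
      --labels=run=e2e-test-httpd-pod
    # Server-side dry-run: the API server runs admission and validation
    # but persists nothing, so the live pod is unchanged.
    kubectl patch pod e2e-test-httpd-pod --dry-run=server \
      -p '{"spec":{"containers":[{"name":"e2e-test-httpd-pod","image":"k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}}'
    kubectl get pod e2e-test-httpd-pod -o jsonpath='{.spec.containers[0].image}'
    # -> k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
    kubectl delete pod e2e-test-httpd-pod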
• [SLOW TEST:16.220 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:903 should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:03:22.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-7783 STEP: creating service affinity-clusterip-transition in namespace services-7783 STEP: creating replication controller affinity-clusterip-transition in namespace services-7783 I1030 01:03:22.290146 31 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-7783, replica count: 3 I1030 01:03:25.342611 31 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:03:28.343117 31 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 30 01:03:28.347: INFO: Creating new exec pod Oct 30 01:03:33.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7783 exec execpod-affinityctj44 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Oct 30 01:03:33.621: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip-transition 80\n+ echo hostName\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" Oct 30 01:03:33.621: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 30 01:03:33.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7783 exec execpod-affinityctj44 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.62.58 80' Oct 30 01:03:33.874: INFO: stderr: "+ nc -v -t -w 2 10.233.62.58 80\nConnection to 10.233.62.58 80 port [tcp/http] succeeded!\n+ echo hostName\n" Oct 30 01:03:33.874: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 30 01:03:33.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7783 exec execpod-affinityctj44 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.62.58:80/ ; done' Oct 30 01:03:34.197: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ 
echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n" Oct 30 01:03:34.197: INFO: stdout: "\naffinity-clusterip-transition-8p4sp\naffinity-clusterip-transition-8p4sp\naffinity-clusterip-transition-8p4sp\naffinity-clusterip-transition-gtrjc\naffinity-clusterip-transition-gtrjc\naffinity-clusterip-transition-8p4sp\naffinity-clusterip-transition-gtrjc\naffinity-clusterip-transition-8p4sp\naffinity-clusterip-transition-pcxm4\naffinity-clusterip-transition-8p4sp\naffinity-clusterip-transition-pcxm4\naffinity-clusterip-transition-gtrjc\naffinity-clusterip-transition-gtrjc\naffinity-clusterip-transition-pcxm4\naffinity-clusterip-transition-pcxm4\naffinity-clusterip-transition-pcxm4" Oct 30 01:03:34.197: INFO: Received response from host: affinity-clusterip-transition-8p4sp Oct 30 01:03:34.197: INFO: Received response from host: affinity-clusterip-transition-8p4sp Oct 30 01:03:34.197: INFO: Received response from host: affinity-clusterip-transition-8p4sp Oct 30 01:03:34.197: INFO: Received response from host: affinity-clusterip-transition-gtrjc Oct 30 01:03:34.197: INFO: Received response from host: affinity-clusterip-transition-gtrjc Oct 30 01:03:34.197: INFO: Received response from host: affinity-clusterip-transition-8p4sp Oct 30 01:03:34.197: INFO: Received response from host: affinity-clusterip-transition-gtrjc Oct 30 01:03:34.197: INFO: Received response from host: affinity-clusterip-transition-8p4sp Oct 30 01:03:34.197: INFO: Received response from host: affinity-clusterip-transition-pcxm4 Oct 30 01:03:34.197: INFO: Received response from host: affinity-clusterip-transition-8p4sp Oct 30 01:03:34.197: INFO: Received response from host: affinity-clusterip-transition-pcxm4 Oct 30 01:03:34.197: INFO: Received response from host: affinity-clusterip-transition-gtrjc Oct 30 01:03:34.197: INFO: Received response from host: affinity-clusterip-transition-gtrjc Oct 30 01:03:34.197: INFO: Received response from host: affinity-clusterip-transition-pcxm4 Oct 30 01:03:34.197: INFO: Received response from host: affinity-clusterip-transition-pcxm4 Oct 30 01:03:34.197: INFO: Received response from host: affinity-clusterip-transition-pcxm4 Oct 30 01:03:34.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7783 exec execpod-affinityctj44 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.62.58:80/ ; done' Oct 30 01:03:34.508: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.62.58:80/\n" Oct 30 01:03:34.508: INFO: stdout: "\naffinity-clusterip-transition-8p4sp\naffinity-clusterip-transition-8p4sp\naffinity-clusterip-transition-8p4sp\naffinity-clusterip-transition-8p4sp\naffinity-clusterip-transition-8p4sp\naffinity-clusterip-transition-8p4sp\naffinity-clusterip-transition-8p4sp\naffinity-clusterip-transition-8p4sp\naffinity-clusterip-transition-8p4sp\naffinity-clusterip-transition-8p4sp\naffinity-clusterip-transition-8p4sp\naffinity-clusterip-transition-8p4sp\naffinity-clusterip-transition-8p4sp\naffinity-clusterip-transition-8p4sp\naffinity-clusterip-transition-8p4sp\naffinity-clusterip-transition-8p4sp" Oct 30 01:03:34.508: INFO: Received response from host: affinity-clusterip-transition-8p4sp Oct 30 01:03:34.508: INFO: Received response from host: affinity-clusterip-transition-8p4sp Oct 30 01:03:34.508: INFO: Received response from host: affinity-clusterip-transition-8p4sp Oct 30 01:03:34.508: INFO: Received response from host: affinity-clusterip-transition-8p4sp Oct 30 01:03:34.508: INFO: Received response from host: affinity-clusterip-transition-8p4sp Oct 30 01:03:34.508: INFO: Received response from host: affinity-clusterip-transition-8p4sp Oct 30 01:03:34.508: INFO: Received response from host: affinity-clusterip-transition-8p4sp Oct 30 01:03:34.508: INFO: Received response from host: affinity-clusterip-transition-8p4sp Oct 30 01:03:34.508: INFO: Received response from host: affinity-clusterip-transition-8p4sp Oct 30 01:03:34.508: INFO: Received response from host: affinity-clusterip-transition-8p4sp Oct 30 01:03:34.508: INFO: Received response from host: affinity-clusterip-transition-8p4sp Oct 30 01:03:34.508: INFO: Received response from host: affinity-clusterip-transition-8p4sp Oct 30 01:03:34.508: INFO: Received response from host: affinity-clusterip-transition-8p4sp Oct 30 01:03:34.508: INFO: Received response from host: affinity-clusterip-transition-8p4sp Oct 30 01:03:34.508: INFO: Received response from host: affinity-clusterip-transition-8p4sp Oct 30 01:03:34.508: INFO: Received response from host: affinity-clusterip-transition-8p4sp Oct 30 01:03:34.508: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-7783, will wait for the garbage collector to delete the pods Oct 30 01:03:34.571: INFO: Deleting ReplicationController affinity-clusterip-transition took: 3.769307ms Oct 30 01:03:34.672: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.852846ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:03:43.181: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7783" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:20.929 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":31,"skipped":652,"failed":0} SSSS ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":34,"skipped":473,"failed":0} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:03:43.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should complete a service status lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Service STEP: watching for the Service to be added Oct 30 01:03:43.159: INFO: Found Service test-service-frzl4 in namespace services-7987 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] Oct 30 01:03:43.159: INFO: Service test-service-frzl4 created STEP: Getting /status Oct 30 01:03:43.162: INFO: Service test-service-frzl4 has LoadBalancer: {[]} STEP: patching the ServiceStatus STEP: watching for the Service to be patched Oct 30 01:03:43.167: INFO: observed Service test-service-frzl4 in namespace services-7987 with annotations: map[] & LoadBalancer: {[]} Oct 30 01:03:43.167: INFO: Found Service test-service-frzl4 in namespace services-7987 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} Oct 30 01:03:43.167: INFO: Service test-service-frzl4 has service status patched STEP: updating the ServiceStatus Oct 30 01:03:43.172: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the Service to be updated Oct 30 01:03:43.173: INFO: Observed Service test-service-frzl4 in namespace services-7987 with annotations: map[] & Conditions: {[]} Oct 30 01:03:43.173: INFO: Observed event: &Service{ObjectMeta:{test-service-frzl4 services-7987 93ae918e-8e7b-418e-88cf-a32f2854d201 76417 0 2021-10-30 01:03:43 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2021-10-30 01:03:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}},"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.233.18.44,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:*SingleStack,ClusterIPs:[10.233.18.44],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} Oct 30 01:03:43.174: INFO: Found Service test-service-frzl4 in namespace services-7987 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Oct 30 01:03:43.174: INFO: Service test-service-frzl4 has service status updated STEP: patching the service STEP: watching for the Service to be patched Oct 30 01:03:43.185: INFO: observed Service test-service-frzl4 in namespace services-7987 with labels: map[test-service-static:true] Oct 30 01:03:43.185: INFO: observed Service test-service-frzl4 in namespace services-7987 with labels: map[test-service-static:true] Oct 30 01:03:43.185: INFO: observed Service test-service-frzl4 in namespace services-7987 with labels: map[test-service-static:true] Oct 30 01:03:43.185: INFO: Found Service test-service-frzl4 in namespace services-7987 with labels: map[test-service:patched test-service-static:true] Oct 30 01:03:43.185: INFO: Service test-service-frzl4 patched STEP: deleting the service STEP: watching for the Service to be deleted Oct 30 01:03:43.193: INFO: Observed event: ADDED Oct 30 01:03:43.193: INFO: Observed event: MODIFIED Oct 30 01:03:43.193: INFO: Observed event: MODIFIED Oct 30 01:03:43.193: INFO: Observed event: MODIFIED Oct 30 01:03:43.193: INFO: Found Service test-service-frzl4 in namespace services-7987 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] Oct 30 01:03:43.193: INFO: Service test-service-frzl4 deleted [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:03:43.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7987" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 •S ------------------------------ {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":35,"skipped":473,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:03:43.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:03:43.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5523" for this suite. •S ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":36,"skipped":478,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:03:43.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:03:43.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-905" for this suite. 
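The immutable-ConfigMap spec above produces no visible steps because the whole check happens API-side. A minimal sketch of what it exercises; the ConfigMap name is illustrative:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: immutable-demo
    data:
      key: original
    immutable: true
    EOF

    # Once immutable is set, any change to data or binaryData is rejected
    kubectl patch configmap immutable-demo -p '{"data":{"key":"changed"}}'   # fails with Invalid
    kubectl delete configmap immutable-demo                                  # deletion still works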
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":32,"skipped":725,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:02:10.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-007b551b-a187-45bf-91be-2b9f7d1096e6 STEP: Creating secret with name s-test-opt-upd-c4f7e856-b09b-4a81-8564-80da0017f672 STEP: Creating the pod Oct 30 01:02:10.740: INFO: The status of Pod pod-projected-secrets-fbadb925-6dcd-407c-b85a-08769e3424fa is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:02:12.744: INFO: The status of Pod pod-projected-secrets-fbadb925-6dcd-407c-b85a-08769e3424fa is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:02:14.744: INFO: The status of Pod pod-projected-secrets-fbadb925-6dcd-407c-b85a-08769e3424fa is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:02:16.745: INFO: The status of Pod pod-projected-secrets-fbadb925-6dcd-407c-b85a-08769e3424fa is Running (Ready = true) STEP: Deleting secret s-test-opt-del-007b551b-a187-45bf-91be-2b9f7d1096e6 STEP: Updating secret s-test-opt-upd-c4f7e856-b09b-4a81-8564-80da0017f672 STEP: Creating secret with name s-test-opt-create-c31917bc-7492-4ffc-b66e-cfffce20c941 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:03:45.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4085" for this suite. 
• [SLOW TEST:95.117 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":521,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:03:43.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on tmpfs Oct 30 01:03:43.398: INFO: Waiting up to 5m0s for pod "pod-7234d463-68d8-4062-afcf-156df68fff7f" in namespace "emptydir-4913" to be "Succeeded or Failed" Oct 30 01:03:43.401: INFO: Pod "pod-7234d463-68d8-4062-afcf-156df68fff7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.564524ms Oct 30 01:03:45.404: INFO: Pod "pod-7234d463-68d8-4062-afcf-156df68fff7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005655464s Oct 30 01:03:47.407: INFO: Pod "pod-7234d463-68d8-4062-afcf-156df68fff7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008962083s Oct 30 01:03:49.410: INFO: Pod "pod-7234d463-68d8-4062-afcf-156df68fff7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011913765s STEP: Saw pod success Oct 30 01:03:49.410: INFO: Pod "pod-7234d463-68d8-4062-afcf-156df68fff7f" satisfied condition "Succeeded or Failed" Oct 30 01:03:49.413: INFO: Trying to get logs from node node1 pod pod-7234d463-68d8-4062-afcf-156df68fff7f container test-container: STEP: delete the pod Oct 30 01:03:49.493: INFO: Waiting for pod pod-7234d463-68d8-4062-afcf-156df68fff7f to disappear Oct 30 01:03:49.497: INFO: Pod pod-7234d463-68d8-4062-afcf-156df68fff7f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:03:49.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4913" for this suite. 
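The emptyDir mode check above can be reproduced by hand: medium Memory backs the volume with tmpfs, and the default mount mode is 0777. A sketch; the pod name is illustrative:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["sh", "-c", "stat -c '%a' /test-volume && mount | grep /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory    # tmpfs instead of node-local disk
    EOF

    kubectl logs emptydir-tmpfs-demo   # once Succeeded: expect "777" and a tmpfs mount entry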
• [SLOW TEST:6.138 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:03:45.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for all pods to be running Oct 30 01:03:47.891: INFO: running pods: 0 < 3 Oct 30 01:03:49.896: INFO: running pods: 0 < 3 [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:03:51.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-5397" for this suite. • [SLOW TEST:6.085 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":29,"skipped":523,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:03:51.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:03:51.949: INFO: Creating ReplicaSet my-hostname-basic-1440b639-47c7-4dfa-94ba-8e1b56dd4d9d Oct 30 01:03:51.955: INFO: Pod name my-hostname-basic-1440b639-47c7-4dfa-94ba-8e1b56dd4d9d: Found 0 pods out of 1 Oct 30 01:03:56.962: INFO: Pod name my-hostname-basic-1440b639-47c7-4dfa-94ba-8e1b56dd4d9d: Found 1 pods out of 1 Oct 30 01:03:56.962: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-1440b639-47c7-4dfa-94ba-8e1b56dd4d9d" is running Oct 30 01:03:56.967: INFO: Pod "my-hostname-basic-1440b639-47c7-4dfa-94ba-8e1b56dd4d9d-25547" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
LastTransitionTime:2021-10-30 01:03:51 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 01:03:55 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 01:03:55 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 01:03:51 +0000 UTC Reason: Message:}]) Oct 30 01:03:56.967: INFO: Trying to dial the pod Oct 30 01:04:01.976: INFO: Controller my-hostname-basic-1440b639-47c7-4dfa-94ba-8e1b56dd4d9d: Got expected result from replica 1 [my-hostname-basic-1440b639-47c7-4dfa-94ba-8e1b56dd4d9d-25547]: "my-hostname-basic-1440b639-47c7-4dfa-94ba-8e1b56dd4d9d-25547", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:04:01.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9612" for this suite. • [SLOW TEST:10.056 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":30,"skipped":536,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:04:02.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Oct 30 01:04:02.040: INFO: Waiting up to 5m0s for pod "pod-3790e0ca-c928-48fd-b742-339097adfd3e" in namespace "emptydir-4540" to be "Succeeded or Failed" Oct 30 01:04:02.042: INFO: Pod "pod-3790e0ca-c928-48fd-b742-339097adfd3e": Phase="Pending", Reason="", readiness=false. Elapsed: 1.96324ms Oct 30 01:04:04.046: INFO: Pod "pod-3790e0ca-c928-48fd-b742-339097adfd3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006005674s Oct 30 01:04:06.049: INFO: Pod "pod-3790e0ca-c928-48fd-b742-339097adfd3e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009732225s STEP: Saw pod success Oct 30 01:04:06.050: INFO: Pod "pod-3790e0ca-c928-48fd-b742-339097adfd3e" satisfied condition "Succeeded or Failed" Oct 30 01:04:06.055: INFO: Trying to get logs from node node1 pod pod-3790e0ca-c928-48fd-b742-339097adfd3e container test-container: STEP: delete the pod Oct 30 01:04:06.068: INFO: Waiting for pod pod-3790e0ca-c928-48fd-b742-339097adfd3e to disappear Oct 30 01:04:06.070: INFO: Pod pod-3790e0ca-c928-48fd-b742-339097adfd3e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:04:06.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4540" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":544,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":726,"failed":0} [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:03:49.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:04:06.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-361" for this suite. • [SLOW TEST:17.070 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":-1,"completed":34,"skipped":726,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:04:06.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:04:06.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2698" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":746,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:04:06.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:04:06.163: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:04:12.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2752" for this suite. 
• [SLOW TEST:6.044 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":32,"skipped":581,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:04:12.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:04:12.233: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-b7dd40c4-a7ef-4f08-9789-2f15d5f52846" in namespace "security-context-test-8840" to be "Succeeded or Failed" Oct 30 01:04:12.235: INFO: Pod "busybox-readonly-false-b7dd40c4-a7ef-4f08-9789-2f15d5f52846": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046152ms Oct 30 01:04:14.239: INFO: Pod "busybox-readonly-false-b7dd40c4-a7ef-4f08-9789-2f15d5f52846": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005573227s Oct 30 01:04:16.242: INFO: Pod "busybox-readonly-false-b7dd40c4-a7ef-4f08-9789-2f15d5f52846": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008699817s Oct 30 01:04:16.242: INFO: Pod "busybox-readonly-false-b7dd40c4-a7ef-4f08-9789-2f15d5f52846" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:04:16.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8840" for this suite. 
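The security-context spec above asserts the inverse of the read-only rootfs guard: with readOnlyRootFilesystem set to false, writes to the container filesystem succeed. A minimal sketch; names are illustrative:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-readonly-false-demo
    spec:
      restartPolicy: Never
      containers:
      - name: writer
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["sh", "-c", "touch /tmp/probe && echo rootfs is writable"]
        securityContext:
          readOnlyRootFilesystem: false   # flip to true and the touch fails
    EOF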
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":587,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:04:06.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5929 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-5929 I1030 01:04:06.707983 31 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5929, replica count: 2 I1030 01:04:09.759241 31 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:04:12.759990 31 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 30 01:04:12.760: INFO: Creating new exec pod Oct 30 01:04:17.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5929 exec execpod7sclw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 30 01:04:18.129: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 30 01:04:18.129: INFO: stdout: "externalname-service-rzwlc" Oct 30 01:04:18.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5929 exec execpod7sclw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.26.243 80' Oct 30 01:04:18.379: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.26.243 80\nConnection to 10.233.26.243 80 port [tcp/http] succeeded!\n" Oct 30 01:04:18.379: INFO: stdout: "externalname-service-64n96" Oct 30 01:04:18.380: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:04:18.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5929" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:11.724 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":36,"skipped":754,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:04:18.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name secret-emptykey-test-e8b1910b-ec35-445d-923a-9cd7a678d06f [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:04:18.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6915" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":37,"skipped":758,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:04:16.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in volume subpath Oct 30 01:04:16.324: INFO: Waiting up to 5m0s for pod "var-expansion-e30368ca-a7f0-48ea-90aa-719b8efcaed0" in namespace "var-expansion-9125" to be "Succeeded or Failed" Oct 30 01:04:16.326: INFO: Pod "var-expansion-e30368ca-a7f0-48ea-90aa-719b8efcaed0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.536805ms Oct 30 01:04:18.329: INFO: Pod "var-expansion-e30368ca-a7f0-48ea-90aa-719b8efcaed0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005206813s Oct 30 01:04:20.335: INFO: Pod "var-expansion-e30368ca-a7f0-48ea-90aa-719b8efcaed0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011688611s STEP: Saw pod success Oct 30 01:04:20.336: INFO: Pod "var-expansion-e30368ca-a7f0-48ea-90aa-719b8efcaed0" satisfied condition "Succeeded or Failed" Oct 30 01:04:20.338: INFO: Trying to get logs from node node2 pod var-expansion-e30368ca-a7f0-48ea-90aa-719b8efcaed0 container dapi-container: STEP: delete the pod Oct 30 01:04:20.349: INFO: Waiting for pod var-expansion-e30368ca-a7f0-48ea-90aa-719b8efcaed0 to disappear Oct 30 01:04:20.352: INFO: Pod var-expansion-e30368ca-a7f0-48ea-90aa-719b8efcaed0 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:04:20.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9125" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":34,"skipped":607,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:04:20.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:04:20.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6192" for this suite. 
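The ConfigMap lifecycle steps above map onto ordinary kubectl verbs. A sketch; the name and label are illustrative:

    kubectl create configmap lifecycle-demo --from-literal=data=value
    kubectl get configmap lifecycle-demo -o yaml
    kubectl patch configmap lifecycle-demo -p '{"data":{"data":"patched"}}'
    kubectl label configmap lifecycle-demo test-configmap=patched
    kubectl get configmaps --all-namespaces -l test-configmap=patched
    kubectl delete configmaps -l test-configmap=patched   # delete-by-collection, current namespace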
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":35,"skipped":623,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:04:18.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-bce005c7-5cfc-4367-9ce6-4661fcbf216a STEP: Creating a pod to test consume secrets Oct 30 01:04:18.473: INFO: Waiting up to 5m0s for pod "pod-secrets-a958146b-d697-4d2d-a3c7-c7fbf25aa79f" in namespace "secrets-2478" to be "Succeeded or Failed" Oct 30 01:04:18.476: INFO: Pod "pod-secrets-a958146b-d697-4d2d-a3c7-c7fbf25aa79f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.736249ms Oct 30 01:04:20.480: INFO: Pod "pod-secrets-a958146b-d697-4d2d-a3c7-c7fbf25aa79f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00657748s Oct 30 01:04:22.485: INFO: Pod "pod-secrets-a958146b-d697-4d2d-a3c7-c7fbf25aa79f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01146936s STEP: Saw pod success Oct 30 01:04:22.485: INFO: Pod "pod-secrets-a958146b-d697-4d2d-a3c7-c7fbf25aa79f" satisfied condition "Succeeded or Failed" Oct 30 01:04:22.487: INFO: Trying to get logs from node node2 pod pod-secrets-a958146b-d697-4d2d-a3c7-c7fbf25aa79f container secret-volume-test: STEP: delete the pod Oct 30 01:04:22.508: INFO: Waiting for pod pod-secrets-a958146b-d697-4d2d-a3c7-c7fbf25aa79f to disappear Oct 30 01:04:22.510: INFO: Pod pod-secrets-a958146b-d697-4d2d-a3c7-c7fbf25aa79f no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:04:22.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2478" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":759,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:04:20.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:04:20.522: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 30 01:04:29.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7413 --namespace=crd-publish-openapi-7413 create -f -' Oct 30 01:04:29.538: INFO: stderr: "" Oct 30 01:04:29.538: INFO: stdout: "e2e-test-crd-publish-openapi-5594-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Oct 30 01:04:29.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7413 --namespace=crd-publish-openapi-7413 delete e2e-test-crd-publish-openapi-5594-crds test-cr' Oct 30 01:04:29.708: INFO: stderr: "" Oct 30 01:04:29.708: INFO: stdout: "e2e-test-crd-publish-openapi-5594-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Oct 30 01:04:29.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7413 --namespace=crd-publish-openapi-7413 apply -f -' Oct 30 01:04:30.036: INFO: stderr: "" Oct 30 01:04:30.036: INFO: stdout: "e2e-test-crd-publish-openapi-5594-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Oct 30 01:04:30.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7413 --namespace=crd-publish-openapi-7413 delete e2e-test-crd-publish-openapi-5594-crds test-cr' Oct 30 01:04:30.183: INFO: stderr: "" Oct 30 01:04:30.183: INFO: stdout: "e2e-test-crd-publish-openapi-5594-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Oct 30 01:04:30.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7413 explain e2e-test-crd-publish-openapi-5594-crds' Oct 30 01:04:30.511: INFO: stderr: "" Oct 30 01:04:30.511: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5594-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:04:33.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7413" for this suite. 
• [SLOW TEST:13.506 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":36,"skipped":628,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:03:43.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Oct 30 01:03:49.326: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-168 PodName:var-expansion-0a2a41c1-fd41-41a5-b35b-1e787e1d93a3 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:03:49.326: INFO: >>> kubeConfig: /root/.kube/config STEP: test for file in mounted path Oct 30 01:03:49.492: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-168 PodName:var-expansion-0a2a41c1-fd41-41a5-b35b-1e787e1d93a3 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:03:49.492: INFO: >>> kubeConfig: /root/.kube/config STEP: updating the annotation value Oct 30 01:03:50.200: INFO: Successfully updated pod "var-expansion-0a2a41c1-fd41-41a5-b35b-1e787e1d93a3" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Oct 30 01:03:50.203: INFO: Deleting pod "var-expansion-0a2a41c1-fd41-41a5-b35b-1e787e1d93a3" in namespace "var-expansion-168" Oct 30 01:03:50.209: INFO: Wait up to 5m0s for pod "var-expansion-0a2a41c1-fd41-41a5-b35b-1e787e1d93a3" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:04:34.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-168" for this suite. 
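The variable-expansion subpath specs in this section (the one just above and the earlier "substituting values in a volume subpath" one) revolve around expanding $(VAR) references in a volume subpath. The documented mechanism is subPathExpr, which expands environment variables at mount time; a minimal sketch with illustrative names:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["sh", "-c", "touch /volume_mount/test.log && ls /volume_mount"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: workdir
          mountPath: /volume_mount
          subPathExpr: $(POD_NAME)   # each pod writes under its own subdirectory
      volumes:
      - name: workdir
        emptyDir: {}
    EOF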
• [SLOW TEST:50.939 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":37,"skipped":479,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:33.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-61f35554-8f4b-4b92-a431-819be75359d8 in namespace container-probe-8891 Oct 30 01:00:37.597: INFO: Started pod liveness-61f35554-8f4b-4b92-a431-819be75359d8 in namespace container-probe-8891 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 01:00:37.600: INFO: Initial restart count of pod liveness-61f35554-8f4b-4b92-a431-819be75359d8 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:04:38.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8891" for this suite. • [SLOW TEST:244.645 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":592,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:04:22.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Oct 30 01:04:22.584: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:04:24.588: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:04:26.589: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Oct 30 01:04:26.605: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:04:28.609: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:04:30.608: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Oct 30 01:04:30.621: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 30 01:04:30.623: INFO: Pod pod-with-poststart-exec-hook still exists Oct 30 01:04:32.624: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 30 01:04:32.627: INFO: Pod pod-with-poststart-exec-hook still exists Oct 30 01:04:34.623: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 30 01:04:34.627: INFO: Pod pod-with-poststart-exec-hook still exists Oct 30 01:04:36.624: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 30 01:04:36.627: INFO: Pod pod-with-poststart-exec-hook still exists Oct 30 01:04:38.624: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 30 01:04:38.626: INFO: Pod pod-with-poststart-exec-hook still exists Oct 30 01:04:40.624: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 30 01:04:40.627: INFO: Pod pod-with-poststart-exec-hook still exists Oct 30 01:04:42.624: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 30 01:04:42.627: INFO: Pod pod-with-poststart-exec-hook still exists Oct 30 01:04:44.623: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 30 01:04:44.627: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:04:44.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2817" for this suite. 
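
The lifecycle-hook test whose log ends above pairs two pods: pod-handle-http-request, which serves on port 8080, and pod-with-poststart-exec-hook, whose postStart exec hook calls back to the handler before the hooked container is considered started; the test then asserts the handler saw the request. A minimal sketch of the hooked pod follows, with hypothetical names, image, and callback command; in the v1.21 API the hook handler type is Handler (LifecycleHandler in k8s.io/api v0.23+).

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // postStartPod attaches an exec postStart hook that phones home to the
    // handler pod. The hook runs right after the container is created, and a
    // failing hook would cause the container to be killed and restarted.
    func postStartPod(handlerIP string) *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:    "hooked",
    				Image:   "busybox:1.34", // hypothetical; any image with sh+wget works
    				Command: []string{"sh", "-c", "sleep 3600"},
    				Lifecycle: &corev1.Lifecycle{
    					PostStart: &corev1.Handler{ // LifecycleHandler in k8s.io/api >= v0.23
    						Exec: &corev1.ExecAction{
    							Command: []string{"sh", "-c",
    								"wget -qO- http://" + handlerIP + ":8080/echo?msg=poststart"},
    						},
    					},
    				},
    			}},
    		},
    	}
    }

    func main() { fmt.Printf("%+v\n", postStartPod("10.0.0.1")) }
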
• [SLOW TEST:22.086 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":777,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:02:55.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-95 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet Oct 30 01:02:55.079: INFO: Found 0 stateful pods, waiting for 3 Oct 30 01:03:05.085: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 30 01:03:05.085: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 30 01:03:05.085: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Oct 30 01:03:05.111: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Oct 30 01:03:15.140: INFO: Updating stateful set ss2 Oct 30 01:03:15.145: INFO: Waiting for Pod statefulset-95/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 STEP: Restoring Pods to the correct revision when they are deleted Oct 30 01:03:25.167: INFO: Found 1 stateful pods, waiting for 3 Oct 30 01:03:35.170: INFO: Found 2 stateful pods, waiting for 3 Oct 30 01:03:45.170: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 30 01:03:45.170: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 30 01:03:45.170: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Oct 30 01:03:55.173: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 30 01:03:55.173: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 30 01:03:55.173: INFO: Waiting for pod ss2-2 to 
enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Oct 30 01:03:55.195: INFO: Updating stateful set ss2 Oct 30 01:03:55.200: INFO: Waiting for Pod statefulset-95/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Oct 30 01:04:05.224: INFO: Updating stateful set ss2 Oct 30 01:04:05.229: INFO: Waiting for StatefulSet statefulset-95/ss2 to complete update Oct 30 01:04:05.229: INFO: Waiting for Pod statefulset-95/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Oct 30 01:04:15.237: INFO: Deleting all statefulset in ns statefulset-95 Oct 30 01:04:15.239: INFO: Scaling statefulset ss2 to 0 Oct 30 01:04:45.258: INFO: Waiting for statefulset status.replicas updated to 0 Oct 30 01:04:45.260: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:04:45.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-95" for this suite. • [SLOW TEST:110.230 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":42,"skipped":500,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:04:44.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Oct 30 01:04:44.702: INFO: Waiting up to 5m0s for pod "downward-api-6a69af85-5f60-4b49-8ac4-f88642f47a00" in namespace "downward-api-9687" to be "Succeeded or Failed" Oct 30 01:04:44.705: INFO: Pod "downward-api-6a69af85-5f60-4b49-8ac4-f88642f47a00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.709141ms Oct 30 01:04:46.708: INFO: Pod "downward-api-6a69af85-5f60-4b49-8ac4-f88642f47a00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006455629s Oct 30 01:04:48.711: INFO: Pod "downward-api-6a69af85-5f60-4b49-8ac4-f88642f47a00": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009172855s STEP: Saw pod success Oct 30 01:04:48.711: INFO: Pod "downward-api-6a69af85-5f60-4b49-8ac4-f88642f47a00" satisfied condition "Succeeded or Failed" Oct 30 01:04:48.714: INFO: Trying to get logs from node node2 pod downward-api-6a69af85-5f60-4b49-8ac4-f88642f47a00 container dapi-container: STEP: delete the pod Oct 30 01:04:48.727: INFO: Waiting for pod downward-api-6a69af85-5f60-4b49-8ac4-f88642f47a00 to disappear Oct 30 01:04:48.729: INFO: Pod downward-api-6a69af85-5f60-4b49-8ac4-f88642f47a00 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:04:48.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9687" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":795,"failed":0} Oct 30 01:04:48.739: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:04:38.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:04:38.284: INFO: Creating deployment "webserver-deployment" Oct 30 01:04:38.287: INFO: Waiting for observed generation 1 Oct 30 01:04:40.292: INFO: Waiting for all required pods to come up Oct 30 01:04:40.296: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Oct 30 01:04:50.305: INFO: Waiting for deployment "webserver-deployment" to complete Oct 30 01:04:50.310: INFO: Updating deployment "webserver-deployment" with a non-existent image Oct 30 01:04:50.316: INFO: Updating deployment webserver-deployment Oct 30 01:04:50.316: INFO: Waiting for observed generation 2 Oct 30 01:04:52.321: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Oct 30 01:04:52.324: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Oct 30 01:04:52.326: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Oct 30 01:04:52.334: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Oct 30 01:04:52.334: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Oct 30 01:04:52.336: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Oct 30 01:04:52.341: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Oct 30 01:04:52.341: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Oct 30 01:04:52.347: INFO: Updating deployment webserver-deployment Oct 30 01:04:52.347: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Oct 30 01:04:52.352: INFO: Verifying that 
first rollout's replicaset has .spec.replicas = 20 Oct 30 01:04:52.354: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 30 01:04:52.359: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-2893 15931640-0c89-4e78-966f-2ff31dfa85db 77793 3 2021-10-30 01:04:38 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-30 01:04:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-30 01:04:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0051dbd18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-10-30 01:04:50 +0000 UTC,LastTransitionTime:2021-10-30 01:04:38 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-10-30 01:04:52 +0000 
UTC,LastTransitionTime:2021-10-30 01:04:52 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Oct 30 01:04:52.362: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-2893 32d8bb02-9dcb-4416-95b0-461d6580aab9 77791 3 2021-10-30 01:04:50 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 15931640-0c89-4e78-966f-2ff31dfa85db 0xc003eba107 0xc003eba108}] [] [{kube-controller-manager Update apps/v1 2021-10-30 01:04:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15931640-0c89-4e78-966f-2ff31dfa85db\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003eba188 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 30 01:04:52.362: INFO: All old ReplicaSets of Deployment "webserver-deployment": Oct 30 01:04:52.363: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-2893 bba419c8-46e6-4c93-ba00-602a751681aa 77789 3 2021-10-30 01:04:38 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 15931640-0c89-4e78-966f-2ff31dfa85db 0xc003eba1e7 0xc003eba1e8}] [] [{kube-controller-manager Update apps/v1 2021-10-30 01:04:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15931640-0c89-4e78-966f-2ff31dfa85db\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003eba258 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Oct 30 01:04:52.368: INFO: Pod "webserver-deployment-795d758f88-dmdf6" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-dmdf6 webserver-deployment-795d758f88- deployment-2893 865fbae4-3a30-4845-8f2a-6e45417374f4 77681 0 2021-10-30 01:04:50 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 32d8bb02-9dcb-4416-95b0-461d6580aab9 0xc003eba6ef 0xc003eba700}] [] [{kube-controller-manager Update v1 2021-10-30 01:04:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32d8bb02-9dcb-4416-95b0-461d6580aab9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-59jc9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-59jc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:04:52.368: INFO: Pod "webserver-deployment-795d758f88-jksbg" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-jksbg webserver-deployment-795d758f88- deployment-2893 31d25aad-a203-40ef-a12e-3ddc1c15a4a0 77713 0 2021-10-30 01:04:50 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 32d8bb02-9dcb-4416-95b0-461d6580aab9 0xc003eba86f 0xc003eba880}] [] [{kube-controller-manager Update v1 2021-10-30 01:04:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32d8bb02-9dcb-4416-95b0-461d6580aab9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-64jzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequireme
nts{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-64jzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:04:52.369: INFO: Pod "webserver-deployment-795d758f88-jwxt4" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-jwxt4 webserver-deployment-795d758f88- deployment-2893 27b62699-8747-452e-bca4-92a03b18587f 77717 0 2021-10-30 01:04:50 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 32d8bb02-9dcb-4416-95b0-461d6580aab9 0xc003eba9ef 0xc003ebaa00}] [] [{kube-controller-manager Update v1 2021-10-30 01:04:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32d8bb02-9dcb-4416-95b0-461d6580aab9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-755bx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-755bx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:04:52.369: INFO: Pod "webserver-deployment-795d758f88-nbkx9" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-nbkx9 webserver-deployment-795d758f88- deployment-2893 155d11b9-e60e-455f-9945-aac3e1ea6e2c 77785 0 2021-10-30 01:04:50 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.101" ], "mac": "5a:78:eb:90:ed:4a", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.101" ], "mac": "5a:78:eb:90:ed:4a", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 32d8bb02-9dcb-4416-95b0-461d6580aab9 0xc003ebab6f 0xc003ebab80}] [] [{kube-controller-manager Update v1 2021-10-30 01:04:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32d8bb02-9dcb-4416-95b0-461d6580aab9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-30 01:04:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:hostIP":{},"f:startTime":{}}}} {multus Update v1 2021-10-30 01:04:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}},"f:status":{"f:containerStatuses":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4b4pr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4b4pr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetH
ostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-10-30 01:04:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:nil,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:04:52.369: INFO: Pod "webserver-deployment-795d758f88-x2hbr" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-x2hbr webserver-deployment-795d758f88- deployment-2893 d2338d79-a5f6-40e9-b8fd-fe8783dcdf1b 77708 0 2021-10-30 01:04:50 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 32d8bb02-9dcb-4416-95b0-461d6580aab9 0xc003ebad6f 0xc003ebad80}] [] [{kube-controller-manager Update v1 2021-10-30 01:04:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32d8bb02-9dcb-4416-95b0-461d6580aab9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-30 01:04:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gn48s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gn48s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-10-30 01:04:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:04:52.370: INFO: Pod "webserver-deployment-847dcfb7fb-2hh9f" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2hh9f webserver-deployment-847dcfb7fb- deployment-2893 974d4ac7-2e1f-41f2-b19e-a9b2cf9247b0 77640 0 2021-10-30 01:04:38 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.242" ], "mac": "7a:dd:00:ce:dd:00", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.242" ], "mac": "7a:dd:00:ce:dd:00", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bba419c8-46e6-4c93-ba00-602a751681aa 0xc003ebaf5f 0xc003ebaf70}] [] [{kube-controller-manager Update v1 2021-10-30 01:04:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bba419c8-46e6-4c93-ba00-602a751681aa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:04:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:04:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.242\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-z6298,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z6298,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.242,StartTime:2021-10-30 01:04:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:04:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://10e042b99dd71487035cb9f3fd5628c11bc62c27d7456297693eaacfbe9c0654,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.242,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:04:52.370: INFO: Pod "webserver-deployment-847dcfb7fb-427mc" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-427mc webserver-deployment-847dcfb7fb- deployment-2893 0c11f22c-1f14-425a-87eb-dc0c74e3e6b0 77794 0 2021-10-30 01:04:52 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bba419c8-46e6-4c93-ba00-602a751681aa 0xc003ebb15f 0xc003ebb170}] [] [{kube-controller-manager Update v1 2021-10-30 01:04:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bba419c8-46e6-4c93-ba00-602a751681aa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-t5h7n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t5h7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exist
s,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:04:52.371: INFO: Pod "webserver-deployment-847dcfb7fb-46mjq" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-46mjq webserver-deployment-847dcfb7fb- deployment-2893 6b0aa5b0-9c6c-4aa1-adc1-22995e0445c8 77606 0 2021-10-30 01:04:38 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.99" ], "mac": "7e:e7:9a:cc:e5:39", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.99" ], "mac": "7e:e7:9a:cc:e5:39", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bba419c8-46e6-4c93-ba00-602a751681aa 0xc003ebb29f 0xc003ebb2b0}] [] [{kube-controller-manager Update v1 2021-10-30 01:04:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bba419c8-46e6-4c93-ba00-602a751681aa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:04:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:04:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.99\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hvsqc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hvsqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.99,StartTime:2021-10-30 01:04:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:04:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://2dfa604b37e56d0ab550ccc5462edde894aaefb396b9fd85f9a32cfa57bd395c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.99,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:04:52.371: INFO: Pod "webserver-deployment-847dcfb7fb-c95jl" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-c95jl webserver-deployment-847dcfb7fb- deployment-2893 ae46fd8c-5719-4c2d-87e8-e69eea3daecb 77578 0 2021-10-30 01:04:38 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.96" ], "mac": "e6:08:68:b5:74:4f", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.96" ], "mac": "e6:08:68:b5:74:4f", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bba419c8-46e6-4c93-ba00-602a751681aa 0xc003ebb49f 0xc003ebb4b0}] [] [{kube-controller-manager Update v1 2021-10-30 01:04:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bba419c8-46e6-4c93-ba00-602a751681aa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:04:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:04:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.96\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hsw84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hsw84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleratio
n{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.96,StartTime:2021-10-30 01:04:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:04:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://1976d4e3f45760d77f50d6da28ca256f531723c7fba4e3506df64ece7d5d71be,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.96,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:04:52.372: INFO: Pod "webserver-deployment-847dcfb7fb-f8d8z" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-f8d8z webserver-deployment-847dcfb7fb- deployment-2893 3bfb3c3a-47ca-451a-bbad-a67b1446cfa1 77594 0 2021-10-30 01:04:38 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.97" ], "mac": "1a:22:1a:d4:06:0f", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.97" ], "mac": "1a:22:1a:d4:06:0f", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bba419c8-46e6-4c93-ba00-602a751681aa 0xc003ebb69f 0xc003ebb6b0}] [] [{kube-controller-manager Update v1 2021-10-30 01:04:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bba419c8-46e6-4c93-ba00-602a751681aa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:04:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:04:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.97\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ww9nr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ww9nr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volum
eDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.97,StartTime:2021-10-30 01:04:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:04:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://503cb0af0c17227a052150158415ace3b7d0d4efe9d2ed7e3a2619e99b32d0cd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.97,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:04:52.372: INFO: Pod "webserver-deployment-847dcfb7fb-l8jqr" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-l8jqr webserver-deployment-847dcfb7fb- deployment-2893 e699018d-de1e-43ca-bc80-fc298ec0a687 77633 0 2021-10-30 01:04:38 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.245" ], "mac": "52:17:68:79:e6:f5", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.245" ], "mac": "52:17:68:79:e6:f5", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 
bba419c8-46e6-4c93-ba00-602a751681aa 0xc003ebb89f 0xc003ebb8b0}] [] [{kube-controller-manager Update v1 2021-10-30 01:04:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bba419c8-46e6-4c93-ba00-602a751681aa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:04:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:04:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.245\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-j95wp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j95wp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpti
ons:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.245,StartTime:2021-10-30 01:04:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:04:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://3d4e3c295a3bc9e7ac84bdc5d139c3f1e40a301ce95873e007062a3f774e1d93,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.245,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:04:52.372: INFO: Pod "webserver-deployment-847dcfb7fb-rps8x" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-rps8x webserver-deployment-847dcfb7fb- deployment-2893 c8ead8ff-ad96-4798-bc98-efeb271cebca 77631 0 2021-10-30 01:04:38 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.240" ], "mac": "7a:69:8b:d1:62:65", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.240" ], 
"mac": "7a:69:8b:d1:62:65", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bba419c8-46e6-4c93-ba00-602a751681aa 0xc003ebba9f 0xc003ebbab0}] [] [{kube-controller-manager Update v1 2021-10-30 01:04:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bba419c8-46e6-4c93-ba00-602a751681aa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:04:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:04:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.240\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wpd59,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wpd59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.240,StartTime:2021-10-30 01:04:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:04:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://53785bd5094eb83b3524cbb6938a64f6e883125e73a1243b7c4254273730d68e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.240,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:04:52.373: INFO: Pod "webserver-deployment-847dcfb7fb-vbrd8" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-vbrd8 webserver-deployment-847dcfb7fb- deployment-2893 0c838cd9-624f-4e3a-898f-5c8bf78237d4 77576 0 2021-10-30 01:04:38 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.98" ], "mac": "d2:ba:05:a7:73:e7", "default": 
true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.98" ], "mac": "d2:ba:05:a7:73:e7", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bba419c8-46e6-4c93-ba00-602a751681aa 0xc003ebbc9f 0xc003ebbcb0}] [] [{kube-controller-manager Update v1 2021-10-30 01:04:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bba419c8-46e6-4c93-ba00-602a751681aa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:04:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:04:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.98\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wvmhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wvmhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Termi
nationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.98,StartTime:2021-10-30 01:04:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:04:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://71d07341b9d26dab8210c1e215af19a9188d84349e36b766da589a145434be3c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.98,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:04:52.374: INFO: Pod "webserver-deployment-847dcfb7fb-vckbr" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-vckbr webserver-deployment-847dcfb7fb- deployment-2893 5397c931-e0f9-4315-babb-936c20aaafdb 77637 0 2021-10-30 01:04:38 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] 
map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.241" ], "mac": "b6:00:3d:5f:0e:49", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.241" ], "mac": "b6:00:3d:5f:0e:49", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bba419c8-46e6-4c93-ba00-602a751681aa 0xc003ebbe9f 0xc003ebbeb0}] [] [{kube-controller-manager Update v1 2021-10-30 01:04:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bba419c8-46e6-4c93-ba00-602a751681aa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:04:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:04:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.241\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5jdx5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5jdx5,ReadOnly:tru
e,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:04:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.241,StartTime:2021-10-30 01:04:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:04:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://5ce052826997e2e17db0b7994a1e244cd488526ec6d2a4df73202ee3fe190724,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.241,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:04:52.374: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2893" for this suite. • [SLOW TEST:14.120 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":39,"skipped":626,"failed":0} Oct 30 01:04:52.390: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:03:28.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-9a5c9b45-567f-4e46-b86b-8afb2edbf205 STEP: Creating configMap with name cm-test-opt-upd-25c311f0-e431-4759-8155-6083d0e152f4 STEP: Creating the pod Oct 30 01:03:28.313: INFO: The status of Pod pod-configmaps-50e7923f-8102-4186-bb9c-1fca85b592d1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:03:30.317: INFO: The status of Pod pod-configmaps-50e7923f-8102-4186-bb9c-1fca85b592d1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:03:32.318: INFO: The status of Pod pod-configmaps-50e7923f-8102-4186-bb9c-1fca85b592d1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:03:34.317: INFO: The status of Pod pod-configmaps-50e7923f-8102-4186-bb9c-1fca85b592d1 is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-9a5c9b45-567f-4e46-b86b-8afb2edbf205 STEP: Updating configmap cm-test-opt-upd-25c311f0-e431-4759-8155-6083d0e152f4 STEP: Creating configMap with name cm-test-opt-create-0037fe5b-20ff-457c-a99a-9269c2b87d52 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:04:58.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8830" for this suite. 
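------------------------------
For readers tracing the ConfigMap spec just above ("optional updates should be reflected in volume"): the test mounts ConfigMaps marked Optional into pod volumes, then deletes one map, updates another, and creates a third, waiting for each change to appear in the mounted files. Below is a minimal Go sketch of the kind of pod spec involved, not the suite's actual fixture code; it assumes only that k8s.io/api is on the module path, and every name (volume, image, mount path, ConfigMap name) is illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Optional=true lets the pod start (and keep running) even if the
	// ConfigMap does not exist yet -- the property the "opt-create" and
	// "opt-del" steps in the log rely on.
	optional := true

	podSpec := corev1.PodSpec{
		Volumes: []corev1.Volume{{
			Name: "cm-volume",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt"},
					Optional:             &optional,
				},
			},
		}},
		Containers: []corev1.Container{{
			Name:  "cm-watcher",
			Image: "k8s.gcr.io/e2e-test-images/busybox:1.29",
			// Re-read the mounted file in a loop so ConfigMap create/update/
			// delete events become observable as the kubelet refreshes the volume.
			Command:      []string{"sh", "-c", "while true; do cat /etc/cm/data; sleep 1; done"},
			VolumeMounts: []corev1.VolumeMount{{Name: "cm-volume", MountPath: "/etc/cm"}},
		}},
	}

	out, err := json.MarshalIndent(podSpec, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // prints the spec; apply it with a client or kubectl
}

The kubelet propagates ConfigMap volume updates on its sync period rather than instantly, which is why the log above spends roughly a minute at "waiting to observe update in volume" instead of asserting immediately.
------------------------------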
• [SLOW TEST:90.646 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":750,"failed":0} Oct 30 01:04:58.901: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:04:34.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-projected-zvjt STEP: Creating a pod to test atomic-volume-subpath Oct 30 01:04:34.054: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-zvjt" in namespace "subpath-2879" to be "Succeeded or Failed" Oct 30 01:04:34.056: INFO: Pod "pod-subpath-test-projected-zvjt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.363161ms Oct 30 01:04:36.059: INFO: Pod "pod-subpath-test-projected-zvjt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005382928s Oct 30 01:04:38.064: INFO: Pod "pod-subpath-test-projected-zvjt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010160384s Oct 30 01:04:40.068: INFO: Pod "pod-subpath-test-projected-zvjt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014430884s Oct 30 01:04:42.072: INFO: Pod "pod-subpath-test-projected-zvjt": Phase="Running", Reason="", readiness=true. Elapsed: 8.018483626s Oct 30 01:04:44.081: INFO: Pod "pod-subpath-test-projected-zvjt": Phase="Running", Reason="", readiness=true. Elapsed: 10.027001349s Oct 30 01:04:46.085: INFO: Pod "pod-subpath-test-projected-zvjt": Phase="Running", Reason="", readiness=true. Elapsed: 12.031194382s Oct 30 01:04:48.089: INFO: Pod "pod-subpath-test-projected-zvjt": Phase="Running", Reason="", readiness=true. Elapsed: 14.03511025s Oct 30 01:04:50.095: INFO: Pod "pod-subpath-test-projected-zvjt": Phase="Running", Reason="", readiness=true. Elapsed: 16.040921779s Oct 30 01:04:52.099: INFO: Pod "pod-subpath-test-projected-zvjt": Phase="Running", Reason="", readiness=true. Elapsed: 18.045429741s Oct 30 01:04:54.104: INFO: Pod "pod-subpath-test-projected-zvjt": Phase="Running", Reason="", readiness=true. Elapsed: 20.049821621s Oct 30 01:04:56.109: INFO: Pod "pod-subpath-test-projected-zvjt": Phase="Running", Reason="", readiness=true. Elapsed: 22.054932225s Oct 30 01:04:58.114: INFO: Pod "pod-subpath-test-projected-zvjt": Phase="Running", Reason="", readiness=true. Elapsed: 24.059795446s Oct 30 01:05:00.120: INFO: Pod "pod-subpath-test-projected-zvjt": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.066408881s STEP: Saw pod success Oct 30 01:05:00.120: INFO: Pod "pod-subpath-test-projected-zvjt" satisfied condition "Succeeded or Failed" Oct 30 01:05:00.124: INFO: Trying to get logs from node node1 pod pod-subpath-test-projected-zvjt container test-container-subpath-projected-zvjt: STEP: delete the pod Oct 30 01:05:00.137: INFO: Waiting for pod pod-subpath-test-projected-zvjt to disappear Oct 30 01:05:00.139: INFO: Pod pod-subpath-test-projected-zvjt no longer exists STEP: Deleting pod pod-subpath-test-projected-zvjt Oct 30 01:05:00.139: INFO: Deleting pod "pod-subpath-test-projected-zvjt" in namespace "subpath-2879" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:05:00.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2879" for this suite. • [SLOW TEST:26.137 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":37,"skipped":630,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} Oct 30 01:05:00.153: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:17.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-7797c84e-bac9-42a2-8ead-22abb3e13db8 in namespace container-probe-2726 Oct 30 01:01:27.579: INFO: Started pod busybox-7797c84e-bac9-42a2-8ead-22abb3e13db8 in namespace container-probe-2726 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 01:01:27.584: INFO: Initial restart count of pod busybox-7797c84e-bac9-42a2-8ead-22abb3e13db8 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:05:28.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2726" for this suite. 
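------------------------------
The probe test above passes when restartCount stays 0 across the roughly four-minute observation window (01:01:27 to 01:05:28), i.e. the exec liveness probe keeps succeeding. A minimal sketch of the kind of pod it creates, in Go with k8s.io/api v0.21 types; the command, delay, and threshold values are assumptions chosen to illustrate the shape, not copied from the test source:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// Create the probed file up front and then stay alive; since
				// /tmp/health keeps existing, "cat /tmp/health" keeps exiting 0
				// and the kubelet never restarts the container.
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					// In k8s.io/api v0.21 (matching this run) the embedded
					// handler field is named Handler; later releases renamed
					// it ProbeHandler.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------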
• [SLOW TEST:250.883 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":315,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} Oct 30 01:05:28.426: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:00:16.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W1030 01:00:16.674690 22 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ForbidConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring no more jobs are scheduled STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:06:00.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-7918" for this suite. 
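------------------------------
The CronJob test above asserts the Forbid concurrency policy: while an earlier Job is still running, the controller must skip subsequent scheduled runs. A minimal batch/v1beta1 object in Go, matching the deprecated API group the run warns about; the schedule, image, and sleep duration are illustrative assumptions:

package main

import (
	"encoding/json"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	batchv1beta1 "k8s.io/api/batch/v1beta1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cj := &batchv1beta1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "forbid-example"},
		Spec: batchv1beta1.CronJobSpec{
			Schedule: "*/1 * * * *",
			// Forbid makes the controller skip a scheduled run while a
			// previous Job is still active, which is exactly what the
			// "Ensuring no more jobs are scheduled" step checks.
			ConcurrencyPolicy: batchv1beta1.ForbidConcurrent,
			JobTemplate: batchv1beta1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "worker",
								Image:   "busybox",
								Command: []string{"sleep", "300"}, // deliberately outlives the 1-minute schedule
							}},
						},
					},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(cj, "", "  ")
	fmt.Println(string(out))
}
------------------------------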
• [SLOW TEST:344.055 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":32,"skipped":487,"failed":0} Oct 30 01:06:00.708: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:04:34.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-997 STEP: creating service affinity-nodeport-transition in namespace services-997 STEP: creating replication controller affinity-nodeport-transition in namespace services-997 I1030 01:04:34.308112 33 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-997, replica count: 3 I1030 01:04:37.359366 33 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:04:40.359977 33 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:04:43.363188 33 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 30 01:04:43.375: INFO: Creating new exec pod Oct 30 01:04:52.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Oct 30 01:04:52.943: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Oct 30 01:04:52.943: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 30 01:04:52.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.59.34 80' Oct 30 01:04:53.194: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.59.34 80\nConnection to 10.233.59.34 80 port [tcp/http] succeeded!\n" Oct 30 01:04:53.194: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 30 01:04:53.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t 
-w 2 10.10.190.207 32437' Oct 30 01:04:53.562: INFO: rc: 1 Oct 30 01:04:53.562: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:04:54.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:04:54.818: INFO: rc: 1 Oct 30 01:04:54.818: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:04:55.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:04:55.784: INFO: rc: 1 Oct 30 01:04:55.784: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:04:56.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:04:56.810: INFO: rc: 1 Oct 30 01:04:56.810: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:04:57.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:04:57.821: INFO: rc: 1 Oct 30 01:04:57.821: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:04:58.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:04:59.030: INFO: rc: 1 Oct 30 01:04:59.030: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32437 + echo hostName nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:04:59.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:04:59.803: INFO: rc: 1 Oct 30 01:04:59.804: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:00.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:00.926: INFO: rc: 1 Oct 30 01:05:00.926: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:01.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:01.821: INFO: rc: 1 Oct 30 01:05:01.821: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:05:02.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:02.924: INFO: rc: 1 Oct 30 01:05:02.925: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32437 + echo hostName nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:03.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:03.818: INFO: rc: 1 Oct 30 01:05:03.819: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:04.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:04.809: INFO: rc: 1 Oct 30 01:05:04.809: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:05.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:05.820: INFO: rc: 1 Oct 30 01:05:05.820: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:05:06.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:06.807: INFO: rc: 1 Oct 30 01:05:06.807: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:07.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:07.811: INFO: rc: 1 Oct 30 01:05:07.811: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:08.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:08.803: INFO: rc: 1 Oct 30 01:05:08.803: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:09.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:09.805: INFO: rc: 1 Oct 30 01:05:09.805: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:05:10.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:10.797: INFO: rc: 1 Oct 30 01:05:10.797: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32437 + echo hostName nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:11.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:11.793: INFO: rc: 1 Oct 30 01:05:11.793: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:12.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:13.044: INFO: rc: 1 Oct 30 01:05:13.044: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:13.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:13.813: INFO: rc: 1 Oct 30 01:05:13.813: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:05:14.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:14.805: INFO: rc: 1 Oct 30 01:05:14.805: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:15.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:15.817: INFO: rc: 1 Oct 30 01:05:15.817: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:16.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:16.834: INFO: rc: 1 Oct 30 01:05:16.834: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:17.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:17.839: INFO: rc: 1 Oct 30 01:05:17.839: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:05:18.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:18.809: INFO: rc: 1 Oct 30 01:05:18.809: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:19.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:19.827: INFO: rc: 1 Oct 30 01:05:19.827: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:20.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:20.812: INFO: rc: 1 Oct 30 01:05:20.812: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:21.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:21.826: INFO: rc: 1 Oct 30 01:05:21.826: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:05:22.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:22.815: INFO: rc: 1 Oct 30 01:05:22.815: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:23.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:23.815: INFO: rc: 1 Oct 30 01:05:23.815: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:24.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:24.807: INFO: rc: 1 Oct 30 01:05:24.807: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:25.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:25.815: INFO: rc: 1 Oct 30 01:05:25.815: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:05:26.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:26.802: INFO: rc: 1 Oct 30 01:05:26.802: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:27.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:27.801: INFO: rc: 1 Oct 30 01:05:27.801: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:28.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:28.837: INFO: rc: 1 Oct 30 01:05:28.837: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:29.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:29.815: INFO: rc: 1 Oct 30 01:05:29.815: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:05:30.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:30.812: INFO: rc: 1 Oct 30 01:05:30.812: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:31.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:31.817: INFO: rc: 1 Oct 30 01:05:31.817: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:32.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:32.805: INFO: rc: 1 Oct 30 01:05:32.806: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:33.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:33.820: INFO: rc: 1 Oct 30 01:05:33.820: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:05:34.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:34.815: INFO: rc: 1 Oct 30 01:05:34.815: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:35.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:35.825: INFO: rc: 1 Oct 30 01:05:35.825: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:36.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:36.807: INFO: rc: 1 Oct 30 01:05:36.807: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:37.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:37.811: INFO: rc: 1 Oct 30 01:05:37.811: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32437 + echo hostName nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:05:38.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:38.799: INFO: rc: 1 Oct 30 01:05:38.799: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:39.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:39.815: INFO: rc: 1 Oct 30 01:05:39.815: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:40.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:40.824: INFO: rc: 1 Oct 30 01:05:40.825: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:41.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:41.796: INFO: rc: 1 Oct 30 01:05:41.796: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:05:42.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:43.227: INFO: rc: 1 Oct 30 01:05:43.227: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:43.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:43.802: INFO: rc: 1 Oct 30 01:05:43.802: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:44.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:44.842: INFO: rc: 1 Oct 30 01:05:44.842: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:45.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:45.791: INFO: rc: 1 Oct 30 01:05:45.792: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:05:46.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:46.831: INFO: rc: 1 Oct 30 01:05:46.831: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:47.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:47.818: INFO: rc: 1 Oct 30 01:05:47.818: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:48.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:48.812: INFO: rc: 1 Oct 30 01:05:48.812: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:49.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:49.825: INFO: rc: 1 Oct 30 01:05:49.825: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:05:50.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:50.813: INFO: rc: 1 Oct 30 01:05:50.813: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:51.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:51.819: INFO: rc: 1 Oct 30 01:05:51.819: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:52.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:52.824: INFO: rc: 1 Oct 30 01:05:52.825: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:05:53.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437' Oct 30 01:05:53.871: INFO: rc: 1 Oct 30 01:05:53.871: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32437 nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
[Retries elided: 56 further attempts between Oct 30 01:05:54.563 and 01:06:52.814, issued roughly once per second. Each ran the same probe:
/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437
and each failed identically with rc: 1 and stderr "nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused", "command terminated with exit code 1", followed by "Retrying...".]
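For readers reconstructing what this loop is doing: the stack trace in the FAIL below points at execAffinityTestForNonLBServiceWithOptionalTransition, which first waits for the NodePort endpoint to become reachable from a client pod and only then checks session affinity. What follows is a minimal, self-contained Go sketch of that flow, not the e2e framework's actual code; probeOnce and the affinity request count are illustrative assumptions, while the kubectl command line, the ~1s retry cadence, and the 2m0s budget are taken from the log above.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// probeOnce execs into the client pod and sends one request to the
// NodePort, exactly like the logged command. On success the reply
// carries the serving pod's hostname.
func probeOnce(host string, port int) (string, error) {
	shellCmd := fmt.Sprintf("echo hostName | nc -v -t -w 2 %s %d", host, port)
	out, err := exec.Command(
		"kubectl", "--kubeconfig=/root/.kube/config",
		"--namespace=services-997",
		"exec", "execpod-affinitychhtt", "--",
		"/bin/sh", "-x", "-c", shellCmd,
	).CombinedOutput()
	return string(out), err
}

func main() {
	host, port := "10.10.190.207", 32437

	// Phase 1: wait for reachability, retrying about once per second
	// until the 2m0s budget seen in the log runs out.
	deadline := time.Now().Add(2 * time.Minute)
	for {
		if _, err := probeOnce(host, port); err == nil {
			break
		}
		if time.Now().After(deadline) {
			fmt.Printf("service is not reachable within 2m0s timeout on endpoint %s:%d over TCP protocol\n", host, port)
			return
		}
		time.Sleep(time.Second)
	}

	// Phase 2 (never reached in this run): with sessionAffinity set,
	// repeated requests from the same client pod should all name the
	// same backend. 15 is an arbitrary illustrative request count.
	seen := map[string]bool{}
	for i := 0; i < 15; i++ {
		if out, err := probeOnce(host, port); err == nil {
			seen[out] = true
		}
	}
	fmt.Printf("affinity held: %v (%d distinct backend replies)\n", len(seen) == 1, len(seen))
}

Note that every probe in this run failed fast with "Connection refused" instead of hanging until nc's 2-second timeout, which typically means nothing was answering on that node:port (for example, missing NodePort rules or no ready endpoints), not that a backend was merely slow.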
Oct 30 01:06:53.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437'
Oct 30 01:06:53.803: INFO: rc: 1
Oct 30 01:06:53.803: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32437
nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 01:06:53.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437'
Oct 30 01:06:54.049: INFO: rc: 1
Oct 30 01:06:54.049: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-997 exec execpod-affinitychhtt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32437:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32437
nc: connect to 10.10.190.207 port 32437 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 01:06:54.050: FAIL: Unexpected error:
    <*errors.errorString | 0xc000ff9740>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32437 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32437 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc001a1b8c0, 0x779f8f8, 0xc001f19760, 0xc00114cf00, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2527
k8s.io/kubernetes/test/e2e/network.glob..func24.27()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1862 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001902900)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001902900)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001902900, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Oct 30 01:06:54.051: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-997, will wait for the garbage collector to delete the pods
Oct 30 01:06:54.116: INFO: Deleting ReplicationController affinity-nodeport-transition took: 4.458201ms
Oct 30 01:06:54.216: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.682554ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-997".
STEP: Found 27 events.
Oct 30 01:07:02.934: INFO: At 2021-10-30 01:04:34 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-8fccv
Oct 30 01:07:02.934: INFO: At 2021-10-30 01:04:34 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-nlfgj
Oct 30 01:07:02.934: INFO: At 2021-10-30 01:04:34 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-wvslf
Oct 30 01:07:02.934: INFO: At 2021-10-30 01:04:34 +0000 UTC - event for affinity-nodeport-transition-8fccv: {default-scheduler } Scheduled: Successfully assigned services-997/affinity-nodeport-transition-8fccv to node1
Oct 30 01:07:02.934: INFO: At 2021-10-30 01:04:34 +0000 UTC - event for affinity-nodeport-transition-nlfgj: {default-scheduler } Scheduled: Successfully assigned services-997/affinity-nodeport-transition-nlfgj to node1
Oct 30 01:07:02.934: INFO: At 2021-10-30 01:04:34 +0000 UTC - event for affinity-nodeport-transition-wvslf: {default-scheduler } Scheduled: Successfully assigned services-997/affinity-nodeport-transition-wvslf to node2
Oct 30 01:07:02.934: INFO: At 2021-10-30 01:04:35 +0000 UTC - event for affinity-nodeport-transition-wvslf: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:07:02.934: INFO: At 2021-10-30 01:04:36 +0000 UTC - event for affinity-nodeport-transition-nlfgj: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:07:02.934: INFO: At 2021-10-30 01:04:36 +0000 UTC - event for affinity-nodeport-transition-wvslf: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 318.805342ms
Oct 30 01:07:02.934: INFO: At 2021-10-30 01:04:36 +0000 UTC - event for affinity-nodeport-transition-wvslf: {kubelet node2} Created: Created container affinity-nodeport-transition
Oct 30 01:07:02.934: INFO: At 2021-10-30 01:04:36 +0000 UTC - event for affinity-nodeport-transition-wvslf: {kubelet node2} Started: Started container affinity-nodeport-transition
Oct 30 01:07:02.934: INFO: At 2021-10-30 01:04:37 +0000 UTC - event for affinity-nodeport-transition-8fccv: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 294.50978ms
Oct 30 01:07:02.934: INFO: At 2021-10-30 01:04:37 +0000 UTC - event for affinity-nodeport-transition-8fccv: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:07:02.934: INFO: At 2021-10-30 01:04:37 +0000 UTC - event for affinity-nodeport-transition-8fccv: {kubelet node1} Started: Started container affinity-nodeport-transition
Oct 30 01:07:02.934: INFO: At 2021-10-30 01:04:37 +0000 UTC - event for affinity-nodeport-transition-8fccv: {kubelet node1} Created: Created container affinity-nodeport-transition
Oct 30 01:07:02.934: INFO: At 2021-10-30 01:04:37 +0000 UTC - event for affinity-nodeport-transition-nlfgj: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 278.222078ms
Oct 30 01:07:02.934: INFO: At 2021-10-30 01:04:37 +0000 UTC - event for affinity-nodeport-transition-nlfgj: {kubelet node1} Created: Created container affinity-nodeport-transition
Oct 30 01:07:02.934: INFO: At 2021-10-30 01:04:37 +0000 UTC - event for affinity-nodeport-transition-nlfgj: {kubelet node1} Started: Started container affinity-nodeport-transition
Oct 30 01:07:02.934:
INFO: At 2021-10-30 01:04:43 +0000 UTC - event for execpod-affinitychhtt: {default-scheduler } Scheduled: Successfully assigned services-997/execpod-affinitychhtt to node1 Oct 30 01:07:02.934: INFO: At 2021-10-30 01:04:48 +0000 UTC - event for execpod-affinitychhtt: {kubelet node1} Started: Started container agnhost-container Oct 30 01:07:02.934: INFO: At 2021-10-30 01:04:48 +0000 UTC - event for execpod-affinitychhtt: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Oct 30 01:07:02.934: INFO: At 2021-10-30 01:04:48 +0000 UTC - event for execpod-affinitychhtt: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 574.379992ms Oct 30 01:07:02.934: INFO: At 2021-10-30 01:04:48 +0000 UTC - event for execpod-affinitychhtt: {kubelet node1} Created: Created container agnhost-container Oct 30 01:07:02.934: INFO: At 2021-10-30 01:06:54 +0000 UTC - event for affinity-nodeport-transition-8fccv: {kubelet node1} Killing: Stopping container affinity-nodeport-transition Oct 30 01:07:02.934: INFO: At 2021-10-30 01:06:54 +0000 UTC - event for affinity-nodeport-transition-nlfgj: {kubelet node1} Killing: Stopping container affinity-nodeport-transition Oct 30 01:07:02.934: INFO: At 2021-10-30 01:06:54 +0000 UTC - event for affinity-nodeport-transition-wvslf: {kubelet node2} Killing: Stopping container affinity-nodeport-transition Oct 30 01:07:02.934: INFO: At 2021-10-30 01:06:54 +0000 UTC - event for execpod-affinitychhtt: {kubelet node1} Killing: Stopping container agnhost-container Oct 30 01:07:02.936: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 01:07:02.936: INFO: Oct 30 01:07:02.940: INFO: Logging node info for node master1 Oct 30 01:07:02.942: INFO: Node Info: &Node{ObjectMeta:{master1 b47c04d5-47a7-4a95-8e97-481e6e60af54 78427 0 2021-10-29 21:05:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-29 21:05:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-29 21:13:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:27 +0000 UTC,LastTransitionTime:2021-10-29 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:06:59 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:06:59 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:06:59 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:06:59 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d3ed60c561e427db72df14bd9006ed0,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:01b9d6bc-4126-4864-a1df-901a1bee4906,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:07:02.943: INFO: Logging kubelet events for node master1 Oct 30 01:07:02.945: INFO: Logging pods the kubelet 
thinks is on node master1
Oct 30 01:07:02.976: INFO: kube-apiserver-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:07:02.976: INFO: Container kube-apiserver ready: true, restart count 0
Oct 30 01:07:02.976: INFO: kube-controller-manager-master1 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:07:02.976: INFO: Container kube-controller-manager ready: true, restart count 2
Oct 30 01:07:02.976: INFO: kube-flannel-d4pmt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 01:07:02.976: INFO: Init container install-cni ready: true, restart count 0
Oct 30 01:07:02.976: INFO: Container kube-flannel ready: true, restart count 2
Oct 30 01:07:02.976: INFO: kube-multus-ds-amd64-wgkfq started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:07:02.976: INFO: Container kube-multus ready: true, restart count 1
Oct 30 01:07:02.976: INFO: kube-scheduler-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:07:02.976: INFO: Container kube-scheduler ready: true, restart count 0
Oct 30 01:07:02.976: INFO: kube-proxy-z5k8p started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:07:02.976: INFO: Container kube-proxy ready: true, restart count 1
Oct 30 01:07:02.976: INFO: coredns-8474476ff8-lczbr started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:07:02.976: INFO: Container coredns ready: true, restart count 1
Oct 30 01:07:02.976: INFO: container-registry-65d7c44b96-zzkfl started at 2021-10-29 21:12:56 +0000 UTC (0+2 container statuses recorded)
Oct 30 01:07:02.976: INFO: Container docker-registry ready: true, restart count 0
Oct 30 01:07:02.976: INFO: Container nginx ready: true, restart count 0
Oct 30 01:07:02.976: INFO: node-exporter-fv84w started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 01:07:02.976: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 30 01:07:02.976: INFO: Container node-exporter ready: true, restart count 0
W1030 01:07:02.991424      33 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
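The AfterEach diagnostics here dump the complete Node objects plus the kubelet's pod list for each node; the readiness signal is buried inside the NodeCondition entries (NetworkUnavailable, MemoryPressure, DiskPressure, PIDPressure, Ready). As a hypothetical illustration of pulling just those conditions with client-go instead of logging whole objects, one might write something like the sketch below. It is not the framework's implementation; it only assumes the same /root/.kube/config used by this run.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig this test run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Each Node carries the condition block seen in the dumps above.
		for _, c := range n.Status.Conditions {
			fmt.Printf("%s %s=%s (%s)\n", n.Name, c.Type, c.Status, c.Reason)
		}
	}
}

Against the dump above this would print lines like "master1 Ready=True (KubeletReady)", i.e. the same NodeCondition data without the surrounding object noise.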
Oct 30 01:07:03.061: INFO: Latency metrics for node master1 Oct 30 01:07:03.061: INFO: Logging node info for node master2 Oct 30 01:07:03.064: INFO: Node Info: &Node{ObjectMeta:{master2 208792d3-d365-4ddb-83d4-10e6e818079c 78407 0 2021-10-29 21:06:06 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-29 21:18:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:19 +0000 
UTC,LastTransitionTime:2021-10-29 21:11:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:06:56 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:06:56 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:06:56 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:06:56 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:12290c1916d84ddda20431c28083da6a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:314e82b8-9747-4131-b883-220496309995,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:07:03.065: INFO: Logging kubelet events for node master2 Oct 30 01:07:03.067: INFO: Logging pods the kubelet thinks is on node master2 Oct 30 01:07:03.088: INFO: node-exporter-lc9kk started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:07:03.088: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:07:03.088: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:07:03.088: INFO: kube-apiserver-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.088: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 01:07:03.088: INFO: kube-controller-manager-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.088: INFO: Container kube-controller-manager ready: true, restart count 3 Oct 30 01:07:03.088: INFO: kube-scheduler-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.088: INFO: Container kube-scheduler ready: true, restart count 2 Oct 30 01:07:03.088: INFO: kube-proxy-5gz4v started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.088: INFO: Container kube-proxy ready: true, restart count 2 Oct 30 01:07:03.088: INFO: kube-flannel-qvqll started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:07:03.088: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:07:03.088: INFO: Container kube-flannel ready: true, restart count 1 Oct 30 01:07:03.088: INFO: kube-multus-ds-amd64-brkpk started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.088: INFO: Container kube-multus ready: true, restart count 1 W1030 01:07:03.103319 33 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Oct 30 01:07:03.162: INFO: Latency metrics for node master2 Oct 30 01:07:03.162: INFO: Logging node info for node master3 Oct 30 01:07:03.164: INFO: Node Info: &Node{ObjectMeta:{master3 168f1589-e029-47ae-b194-10215fc22d6a 78400 0 2021-10-29 21:06:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-29 21:16:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-29 21:16:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:36 +0000 UTC,LastTransitionTime:2021-10-29 21:11:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:06:54 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:06:54 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:06:54 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:06:54 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de18dcb6cb4c493e9f4d987da2c8b3fd,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:89235c4b-b1f5-4716-bbd7-18b41c0bde74,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:07:03.164: INFO: Logging kubelet events for node master3 Oct 30 01:07:03.166: INFO: Logging pods the kubelet thinks is on node master3 Oct 30 01:07:03.184: INFO: kube-apiserver-master3 started at 2021-10-29 21:11:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.184: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 01:07:03.184: INFO: kube-scheduler-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.184: INFO: Container kube-scheduler ready: true, restart count 2 Oct 30 01:07:03.184: INFO: dns-autoscaler-7df78bfcfb-phsdx started at 2021-10-29 21:09:02 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.184: INFO: Container autoscaler ready: true, restart count 1 Oct 30 01:07:03.184: INFO: node-feature-discovery-controller-cff799f9f-qq7g4 started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.184: INFO: Container nfd-controller ready: true, restart count 0 Oct 30 01:07:03.184: INFO: kube-controller-manager-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.184: INFO: Container kube-controller-manager ready: true, restart count 1 Oct 30 01:07:03.184: INFO: kube-proxy-r6fpx started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.184: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:07:03.184: INFO: kube-flannel-rbdlt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:07:03.184: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:07:03.184: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:07:03.184: INFO: kube-multus-ds-amd64-bdwh9 started at 2021-10-29 
21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.184: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:07:03.184: INFO: coredns-8474476ff8-wrwwv started at 2021-10-29 21:09:00 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.184: INFO: Container coredns ready: true, restart count 1 Oct 30 01:07:03.184: INFO: prometheus-operator-585ccfb458-czbr2 started at 2021-10-29 21:21:06 +0000 UTC (0+2 container statuses recorded) Oct 30 01:07:03.184: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:07:03.184: INFO: Container prometheus-operator ready: true, restart count 0 Oct 30 01:07:03.184: INFO: node-exporter-bv946 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:07:03.184: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:07:03.184: INFO: Container node-exporter ready: true, restart count 0 W1030 01:07:03.199989 33 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:07:03.292: INFO: Latency metrics for node master3 Oct 30 01:07:03.292: INFO: Logging node info for node node1 Oct 30 01:07:03.296: INFO: Node Info: &Node{ObjectMeta:{node1 ddef9269-94c5-4165-81fb-a3b0c4ac5c75 78408 0 2021-10-29 21:07:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 
feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:38 +0000 UTC,LastTransitionTime:2021-10-29 21:11:38 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:06:57 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:06:57 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:06:57 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:06:57 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bf4179125e4495c89c046ed0ae7baf7,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ce868148-dc5e-4c7c-a555-42ee929547f7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432289,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:07:03.297: INFO: Logging kubelet events for node node1 Oct 30 01:07:03.300: INFO: Logging pods the kubelet thinks is on node node1 Oct 30 01:07:03.319: INFO: kube-proxy-z5hqt started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.319: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:07:03.319: INFO: collectd-d45rv started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 01:07:03.319: INFO: Container collectd ready: true, restart count 0 Oct 30 01:07:03.319: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:07:03.319: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:07:03.319: INFO: cmk-init-discover-node1-n4mcc started at 2021-10-29 21:19:28 +0000 UTC 
(0+3 container statuses recorded) Oct 30 01:07:03.319: INFO: Container discover ready: false, restart count 0 Oct 30 01:07:03.319: INFO: Container init ready: false, restart count 0 Oct 30 01:07:03.319: INFO: Container install ready: false, restart count 0 Oct 30 01:07:03.319: INFO: cmk-89lqq started at 2021-10-29 21:20:10 +0000 UTC (0+2 container statuses recorded) Oct 30 01:07:03.319: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:07:03.319: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:07:03.319: INFO: node-exporter-256wm started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:07:03.319: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:07:03.319: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:07:03.319: INFO: prometheus-k8s-0 started at 2021-10-29 21:21:17 +0000 UTC (0+4 container statuses recorded) Oct 30 01:07:03.319: INFO: Container config-reloader ready: true, restart count 0 Oct 30 01:07:03.319: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 01:07:03.319: INFO: Container grafana ready: true, restart count 0 Oct 30 01:07:03.319: INFO: Container prometheus ready: true, restart count 1 Oct 30 01:07:03.319: INFO: nginx-proxy-node1 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.319: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:07:03.319: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.319: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 01:07:03.319: INFO: kube-multus-ds-amd64-68wrz started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.320: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:07:03.320: INFO: node-feature-discovery-worker-w5vdb started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.320: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:07:03.320: INFO: kube-flannel-phg88 started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:07:03.320: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:07:03.320: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:07:03.320: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.320: INFO: Container kube-sriovdp ready: true, restart count 0 W1030 01:07:03.334540 33 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Oct 30 01:07:03.531: INFO: Latency metrics for node node1 Oct 30 01:07:03.531: INFO: Logging node info for node node2 Oct 30 01:07:03.535: INFO: Node Info: &Node{ObjectMeta:{node2 3b49ad19-ba56-4f4a-b1fa-eef102063de9 78404 0 2021-10-29 21:07:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:34 +0000 UTC,LastTransitionTime:2021-10-29 21:11:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:06:56 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:06:56 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:06:56 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:06:56 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7283436dd9e34722a6e4df817add95ed,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c219e7bd-582b-4d6c-b379-1161acc70676,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:07:03.538: INFO: Logging kubelet events for node node2 Oct 30 01:07:03.541: INFO: Logging pods the kubelet thinks is on node node2 Oct 30 01:07:03.559: INFO: kube-flannel-f6s5v started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:07:03.559: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:07:03.559: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 01:07:03.559: INFO: 
kube-multus-ds-amd64-7tvbl started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.559: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:07:03.559: INFO: cmk-webhook-6c9d5f8578-ffk66 started at 2021-10-29 21:20:11 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.559: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 01:07:03.559: INFO: nginx-proxy-node2 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.559: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:07:03.559: INFO: kubernetes-dashboard-785dcbb76d-pbjjt started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.559: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 01:07:03.559: INFO: cmk-init-discover-node2-2fmmt started at 2021-10-29 21:19:48 +0000 UTC (0+3 container statuses recorded) Oct 30 01:07:03.559: INFO: Container discover ready: false, restart count 0 Oct 30 01:07:03.559: INFO: Container init ready: false, restart count 0 Oct 30 01:07:03.559: INFO: Container install ready: false, restart count 0 Oct 30 01:07:03.559: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.559: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:07:03.559: INFO: cmk-8bpbf started at 2021-10-29 21:20:11 +0000 UTC (0+2 container statuses recorded) Oct 30 01:07:03.559: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:07:03.559: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:07:03.559: INFO: node-exporter-r77s4 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:07:03.559: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:07:03.559: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:07:03.559: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh started at 2021-10-29 21:24:23 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.559: INFO: Container tas-extender ready: true, restart count 0 Oct 30 01:07:03.559: INFO: collectd-flvhl started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 01:07:03.559: INFO: Container collectd ready: true, restart count 0 Oct 30 01:07:03.559: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:07:03.559: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:07:03.559: INFO: kube-proxy-76285 started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.560: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:07:03.560: INFO: test-webserver-558157c6-5b25-466a-81cc-192973d6d1a6 started at 2021-10-30 01:03:18 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.560: INFO: Container test-webserver ready: true, restart count 0 Oct 30 01:07:03.560: INFO: node-feature-discovery-worker-h6lcp started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:07:03.560: INFO: Container nfd-worker ready: true, restart count 0 W1030 01:07:03.573326 33 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:07:03.737: INFO: Latency metrics for node node2 Oct 30 01:07:03.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-997" for this suite. 
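
(The dump above, node status with cached image list, kubelet events, and per-pod container statuses, is the diagnostic bundle the e2e framework emits after a spec fails. Roughly the same information can be gathered by hand with stock kubectl; a minimal sketch, reusing the node name and kubeconfig path that appear in the log:

kubectl --kubeconfig=/root/.kube/config describe node node2
kubectl --kubeconfig=/root/.kube/config get pods -A -o wide --field-selector spec.nodeName=node2
kubectl --kubeconfig=/root/.kube/config get events -A --field-selector involvedObject.kind=Node,involvedObject.name=node2

describe node covers conditions, capacity, and the image list; the field selectors narrow the pod and event listings to node2.)
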
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [149.470 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:06:54.050: Unexpected error: <*errors.errorString | 0xc000ff9740>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32437 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32437 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":37,"skipped":505,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} Oct 30 01:07:03.755: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:03:18.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod test-webserver-558157c6-5b25-466a-81cc-192973d6d1a6 in namespace container-probe-786 Oct 30 01:03:22.468: INFO: Started pod test-webserver-558157c6-5b25-466a-81cc-192973d6d1a6 in namespace container-probe-786 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 01:03:22.471: INFO: Initial restart count of pod test-webserver-558157c6-5b25-466a-81cc-192973d6d1a6 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:07:23.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-786" for this suite. 
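
(The container-probe spec transcribed above creates a plain web-server pod with an HTTP liveness probe and then simply verifies that its restart count stays at 0 for the watch window. A hand-rolled equivalent, for readers reproducing the behaviour outside the framework, might look like the sketch below; the pod name, probe path, timings, and the agnhost test-webserver image/args are illustrative assumptions, not the framework's exact spec:

kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo             # hypothetical name, not the generated test-webserver-... name
spec:
  containers:
  - name: test-webserver
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["test-webserver"]      # serves HTTP on port 80
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
EOF
# The probe should keep passing, so RESTARTS must stay 0 for the whole window.
kubectl --kubeconfig=/root/.kube/config get pod liveness-demo -w \
  -o custom-columns=NAME:.metadata.name,RESTARTS:.status.containerStatuses[0].restartCount

Deleting the pod afterwards, as the framework does in its AfterEach, avoids leaving the demo pod behind.)
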
• [SLOW TEST:244.606 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":434,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} Oct 30 01:07:23.037: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:01:29.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-935 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-935 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-935 Oct 30 01:01:29.576: INFO: Found 0 stateful pods, waiting for 1 Oct 30 01:01:39.582: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Oct 30 01:01:39.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 30 01:01:39.817: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 30 01:01:39.817: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 30 01:01:39.817: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 30 01:01:39.819: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Oct 30 01:01:49.823: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 30 01:01:49.823: INFO: Waiting for statefulset status.replicas updated to 0 Oct 30 01:01:49.835: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999394s Oct 30 01:01:50.838: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.997329499s Oct 30 01:01:51.841: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.994625476s Oct 30 01:01:52.846: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.990994917s Oct 30 01:01:53.850: INFO: Verifying 
statefulset ss doesn't scale past 1 for another 5.98591327s Oct 30 01:01:54.852: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.9826401s Oct 30 01:01:55.856: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.97897452s Oct 30 01:01:56.860: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.975854429s Oct 30 01:01:57.866: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.970042046s Oct 30 01:01:58.869: INFO: Verifying statefulset ss doesn't scale past 1 for another 966.029712ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-935 Oct 30 01:01:59.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:02:00.115: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Oct 30 01:02:00.115: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 30 01:02:00.115: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 30 01:02:00.119: INFO: Found 1 stateful pods, waiting for 3 Oct 30 01:02:10.124: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Oct 30 01:02:10.124: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Oct 30 01:02:10.124: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Oct 30 01:02:10.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 30 01:02:10.351: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 30 01:02:10.351: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 30 01:02:10.351: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 30 01:02:10.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 30 01:02:10.655: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 30 01:02:10.655: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 30 01:02:10.655: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 30 01:02:10.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 30 01:02:10.892: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 30 01:02:10.892: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 30 01:02:10.892: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 30 01:02:10.892: INFO: Waiting for statefulset status.replicas updated to 0 Oct 30 01:02:10.895: INFO: Waiting for stateful set status.readyReplicas 
to become 0, currently 2 Oct 30 01:02:20.903: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 30 01:02:20.903: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Oct 30 01:02:20.903: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Oct 30 01:02:20.912: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999481s Oct 30 01:02:21.916: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996054685s Oct 30 01:02:22.920: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992814408s Oct 30 01:02:23.923: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.988962784s Oct 30 01:02:24.928: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.985182372s Oct 30 01:02:25.932: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.980814218s Oct 30 01:02:26.936: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.976982776s Oct 30 01:02:27.943: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.971021593s Oct 30 01:02:28.945: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.966657816s Oct 30 01:02:29.950: INFO: Verifying statefulset ss doesn't scale past 3 for another 962.8951ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-935 Oct 30 01:02:30.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:02:33.252: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Oct 30 01:02:33.252: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 30 01:02:33.252: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 30 01:02:33.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:02:33.501: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Oct 30 01:02:33.501: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 30 01:02:33.501: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 30 01:02:33.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:02:35.065: INFO: rc: 1 Oct 30 01:02:35.065: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: Internal error occurred: error executing command in container: container not running (7857656faceb12d4cce4fd2fe292611dc13a4622a369b1c30e8a8e3b0f58264f) error: exit status 1 Oct 30 01:02:45.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:02:45.223: INFO: rc: 1 Oct 30 01:02:45.223: INFO: Waiting 10s to retry failed 
RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:02:55.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:02:55.379: INFO: rc: 1 Oct 30 01:02:55.380: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:03:05.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:03:05.544: INFO: rc: 1 Oct 30 01:03:05.544: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:03:15.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:03:15.697: INFO: rc: 1 Oct 30 01:03:15.697: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:03:25.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:03:25.835: INFO: rc: 1 Oct 30 01:03:25.835: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:03:35.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:03:35.987: INFO: rc: 1 Oct 30 01:03:35.987: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:03:45.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:03:46.145: INFO: rc: 1 Oct 30 01:03:46.145: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:03:56.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:03:56.305: INFO: rc: 1 Oct 30 01:03:56.305: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:04:06.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:04:06.461: INFO: rc: 1 Oct 30 01:04:06.461: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:04:16.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:04:16.619: INFO: rc: 1 Oct 30 01:04:16.619: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:04:26.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:04:26.761: INFO: rc: 1 Oct 30 01:04:26.761: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:04:36.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:04:36.916: INFO: rc: 1 Oct 30 01:04:36.916: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:04:46.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:04:47.064: INFO: rc: 1 Oct 30 01:04:47.064: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:04:57.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:04:57.205: INFO: rc: 1 Oct 30 01:04:57.205: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:05:07.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:05:07.355: INFO: rc: 1 Oct 30 01:05:07.355: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:05:17.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:05:17.509: INFO: rc: 1 Oct 30 01:05:17.509: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:05:27.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:05:27.666: INFO: rc: 1 Oct 30 01:05:27.666: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:05:37.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:05:37.797: INFO: rc: 1 Oct 30 01:05:37.797: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:05:47.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:05:47.933: INFO: rc: 1 Oct 30 01:05:47.933: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c 
mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:05:57.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:05:58.078: INFO: rc: 1 Oct 30 01:05:58.078: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:06:08.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:06:08.234: INFO: rc: 1 Oct 30 01:06:08.235: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:06:18.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:06:18.376: INFO: rc: 1 Oct 30 01:06:18.376: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:06:28.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:06:28.536: INFO: rc: 1 Oct 30 01:06:28.536: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:06:38.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:06:38.726: INFO: rc: 1 Oct 30 01:06:38.726: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:06:48.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:06:48.884: INFO: rc: 1 Oct 30 01:06:48.884: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || 
true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:06:58.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:06:59.048: INFO: rc: 1 Oct 30 01:06:59.048: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:07:09.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:07:09.198: INFO: rc: 1 Oct 30 01:07:09.198: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:07:19.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:07:19.335: INFO: rc: 1 Oct 30 01:07:19.335: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:07:29.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:07:29.489: INFO: rc: 1 Oct 30 01:07:29.489: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 30 01:07:39.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-935 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:07:39.642: INFO: rc: 1 Oct 30 01:07:39.642: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: Oct 30 01:07:39.642: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Oct 30 01:07:39.660: INFO: Deleting all statefulset in ns statefulset-935 Oct 30 01:07:39.662: INFO: Scaling statefulset ss to 0 Oct 30 01:07:39.671: INFO: Waiting for statefulset status.replicas updated to 0 Oct 30 01:07:39.674: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:07:39.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "statefulset-935" for this suite. • [SLOW TEST:370.149 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":20,"skipped":395,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} Oct 30 01:07:39.698: INFO: Running AfterSuite actions on all nodes Oct 30 01:04:45.305: INFO: Running AfterSuite actions on all nodes Oct 30 01:07:39.777: INFO: Running AfterSuite actions on node 1 Oct 30 01:07:39.777: INFO: Skipping dumping logs from cluster Summarizing 6 Failures: [Fail] [sig-network] Services [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493 [Fail] [sig-network] Services [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 [Fail] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 [Fail] [sig-network] Services [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 [Fail] [sig-network] Services [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 [Fail] [sig-network] Services [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 Ran 320 of 5770 Specs in 901.422 seconds FAIL! -- 314 Passed | 6 Failed | 0 Pending | 5450 Skipped Ginkgo ran 1 suite in 15m3.027465377s Test Suite Failed
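
(Five of the six failures summarized above share one symptom: a NodePort Service that never answered on nodeIP:nodePort within the framework's 2m0s poll window, e.g. 10.10.190.207:32437 in the session-affinity failure earlier in this log. When triaging a run like this it helps to separate "kube-proxy never programmed the port" from "no endpoint ever became ready". A debugging sketch, assuming a shell on one of the nodes, an iptables-mode kube-proxy, and substituting the IP/port pair from the failure message:

# 1. Was the NodePort programmed at all?
sudo iptables-save | grep -w 32437

# 2. Does anything answer on the node IP / NodePort pair from the failure?
for i in $(seq 1 10); do
  curl --connect-timeout 2 -s -o /dev/null -w '%{http_code}\n' http://10.10.190.207:32437/ && break
  sleep 2
done

# 3. Did the Service ever get ready endpoints? Only meaningful while the test
#    namespace (e.g. services-997) still exists; it is destroyed on teardown.
kubectl --kubeconfig=/root/.kube/config -n services-997 get endpoints -o wide

If step 1 finds no rule, suspect kube-proxy on that node (its pod appears in the dumps above with restart count 1); if the rule exists but step 2 keeps printing 000, look at the backing pods and any firewall between the client and the node.)
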