Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1636764829 - Will randomize all specs
Will run 5770 specs

Running in parallel across 10 nodes

Nov 13 00:53:51.579: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 00:53:51.584: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov 13 00:53:51.612: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov 13 00:53:51.673: INFO: The status of Pod cmk-init-discover-node1-vkj2s is Succeeded, skipping waiting
Nov 13 00:53:51.673: INFO: The status of Pod cmk-init-discover-node2-5f4hp is Succeeded, skipping waiting
Nov 13 00:53:51.673: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov 13 00:53:51.673: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Nov 13 00:53:51.673: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Nov 13 00:53:51.697: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Nov 13 00:53:51.697: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Nov 13 00:53:51.697: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Nov 13 00:53:51.697: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Nov 13 00:53:51.697: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Nov 13 00:53:51.697: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Nov 13 00:53:51.697: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Nov 13 00:53:51.697: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Nov 13 00:53:51.697: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Nov 13 00:53:51.697: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Nov 13 00:53:51.697: INFO: e2e test version: v1.21.5
Nov 13 00:53:51.698: INFO: kube-apiserver version: v1.21.1
Nov 13 00:53:51.699: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 00:53:51.705: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
Nov 13 00:53:51.705: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 00:53:51.727: INFO: Cluster IP family: ipv4
SSSS
------------------------------
Nov 13 00:53:51.719: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 00:53:51.740: INFO: Cluster IP family: ipv4
S
------------------------------
Nov 13 00:53:51.721: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 00:53:51.742: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
Nov 13 00:53:51.730: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 00:53:51.752: INFO: Cluster IP family: ipv4
S
------------------------------
Nov 13 00:53:51.730: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 00:53:51.754: INFO: Cluster IP family: ipv4
Nov 13 00:53:51.732: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 00:53:51.754: INFO: Cluster IP family: ipv4
SS
------------------------------
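Each of the ten parallel suite workers above starts by loading the same kubeconfig and building a clientset. A minimal client-go sketch of that bootstrap, assuming only the /root/.kube/config path shown in the log; the node listing is a rough stand-in for the framework's "schedulable" wait, not its actual code:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite logs with ">>> kubeConfig:".
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Rough analogue of "Waiting for all (but 0) nodes to be schedulable":
	// list the nodes and report any marked unschedulable.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s unschedulable=%v\n", n.Name, n.Spec.Unschedulable)
	}
}
```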
Nov 13 00:53:51.732: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 00:53:51.755: INFO: Cluster IP family: ipv4
Nov 13 00:53:51.732: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 00:53:51.755: INFO: Cluster IP family: ipv4
SSSSSSSSSS
------------------------------
Nov 13 00:53:51.741: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 00:53:51.762: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:53:51.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
W1113 00:53:51.782808 38 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 00:53:51.783: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 00:53:51.784: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:53:51.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-6323" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}
S
------------------------------
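The Lease conformance test above only needs the coordination.k8s.io/v1 endpoints to respond. A sketch of the same availability check with client-go; the lease name, namespace, and holder identity are illustrative, not the test's actual values:

```go
package main

import (
	"context"
	"fmt"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	holder := "e2e-demo-holder" // illustrative holder identity
	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-lease"},
		Spec:       coordinationv1.LeaseSpec{HolderIdentity: &holder},
	}
	ctx := context.TODO()
	leases := cs.CoordinationV1().Leases("default")
	if _, err := leases.Create(ctx, lease, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	got, err := leases.Get(ctx, "demo-lease", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("lease holder:", *got.Spec.HolderIdentity)
}
```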
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:53:51.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
W1113 00:53:51.779656 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 00:53:51.779: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 00:53:51.781: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
W1113 00:53:51.786819 31 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
[It] should support CronJob API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a cronjob
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Nov 13 00:53:51.804: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Nov 13 00:53:51.808: INFO: starting watch
STEP: patching
STEP: updating
Nov 13 00:53:51.825: INFO: waiting for watch events with expected annotations
Nov 13 00:53:51.825: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:53:51.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-2239" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
SSSSSS
------------------------------
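The deprecation warning above is the point of this test: the suite drives the GA batch/v1 CronJob API rather than the deprecated batch/v1beta1 one. A sketch of the create and cluster-wide-list portion of those API operations; the schedule, image, and names are illustrative:

```go
package main

import (
	"context"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	cron := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-cron"},
		Spec: batchv1.CronJobSpec{
			Schedule: "*/1 * * * *",
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "hello",
								Image:   "busybox",
								Command: []string{"echo", "hello"},
							}},
						},
					},
				},
			},
		},
	}
	ctx := context.TODO()
	if _, err := cs.BatchV1().CronJobs("default").Create(ctx, cron, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Cluster-wide listing, as in the test's "cluster-wide listing" step.
	list, err := cs.BatchV1().CronJobs("").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("cronjobs cluster-wide:", len(list.Items))
}
```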
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:53:51.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Nov 13 00:53:51.934: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"cf207337-1009-407a-9a82-160dd66fd15c", Controller:(*bool)(0xc003c1829a), BlockOwnerDeletion:(*bool)(0xc003c1829b)}}
Nov 13 00:53:51.937: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"eefcdd2e-1df0-46d3-9914-598d5a4c76d5", Controller:(*bool)(0xc004e1279a), BlockOwnerDeletion:(*bool)(0xc004e1279b)}}
Nov 13 00:53:51.942: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"e955e195-880a-41dd-8e29-54edc66dc6aa", Controller:(*bool)(0xc000e10b8a), BlockOwnerDeletion:(*bool)(0xc000e10b8b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:53:56.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3906" for this suite.

• [SLOW TEST:5.097 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}
SSSSSS
------------------------------
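The OwnerReferences dumps above show the circle the test builds: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2. A sketch of wiring that circle with client-go (the pause image and namespace are illustrative); the conformance claim is that the garbage collector can still tear such a cycle down instead of deadlocking on it:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func simplePod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name: "c", Image: "k8s.gcr.io/pause:3.4.1", // illustrative image
		}}},
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods := cs.CoreV1().Pods("default")
	ctx := context.TODO()

	// Create three pods, then wire the owner references into a circle,
	// mirroring the dumps in the log: pod1 <- pod3, pod2 <- pod1, pod3 <- pod2.
	var created [3]*corev1.Pod
	for i, name := range []string{"pod1", "pod2", "pod3"} {
		p, err := pods.Create(ctx, simplePod(name), metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		created[i] = p
	}
	truth := true
	owners := []int{2, 0, 1} // pod1 owned by pod3, pod2 by pod1, pod3 by pod2
	for i, o := range owners {
		created[i].OwnerReferences = []metav1.OwnerReference{{
			APIVersion: "v1", Kind: "Pod",
			Name: created[o].Name, UID: created[o].UID,
			Controller: &truth, BlockOwnerDeletion: &truth,
		}}
		if _, err := pods.Update(ctx, created[i], metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	// Deleting any one pod should let the garbage collector remove the whole circle.
}
```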
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:53:51.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
W1113 00:53:51.813422 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 00:53:51.813: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 00:53:51.815: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Nov 13 00:53:58.867: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:53:58.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7579" for this suite.

• [SLOW TEST:7.111 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
SS
------------------------------
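The container in this test logs "DONE" and exits nonzero; with TerminationMessagePolicy FallbackToLogsOnError the kubelet copies the log tail into the termination message, which is the "Expected: &{DONE}" assertion above. A sketch of such a pod spec, assuming an illustrative busybox image and names:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox",
				// Fail after logging; the log tail then becomes the termination message.
				Command:                  []string{"/bin/sh", "-c", "echo DONE; exit 1"},
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Once the container terminates, the message surfaces at
	// pod.Status.ContainerStatuses[0].State.Terminated.Message.
}
```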
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:53:51.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
W1113 00:53:51.821891 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 00:53:51.822: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 00:53:51.824: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-d8cbe7ce-6ef6-40bd-8202-0d1ab6082bbc
STEP: Creating a pod to test consume configMaps
Nov 13 00:53:51.842: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fb7cbca3-f07a-411c-ad60-7158d106977f" in namespace "projected-785" to be "Succeeded or Failed"
Nov 13 00:53:51.844: INFO: Pod "pod-projected-configmaps-fb7cbca3-f07a-411c-ad60-7158d106977f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082158ms
Nov 13 00:53:53.848: INFO: Pod "pod-projected-configmaps-fb7cbca3-f07a-411c-ad60-7158d106977f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005628043s
Nov 13 00:53:55.851: INFO: Pod "pod-projected-configmaps-fb7cbca3-f07a-411c-ad60-7158d106977f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00879055s
Nov 13 00:53:57.856: INFO: Pod "pod-projected-configmaps-fb7cbca3-f07a-411c-ad60-7158d106977f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014399222s
Nov 13 00:53:59.861: INFO: Pod "pod-projected-configmaps-fb7cbca3-f07a-411c-ad60-7158d106977f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019384161s
Nov 13 00:54:01.865: INFO: Pod "pod-projected-configmaps-fb7cbca3-f07a-411c-ad60-7158d106977f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022724529s
Nov 13 00:54:03.870: INFO: Pod "pod-projected-configmaps-fb7cbca3-f07a-411c-ad60-7158d106977f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.028360129s
Nov 13 00:54:05.874: INFO: Pod "pod-projected-configmaps-fb7cbca3-f07a-411c-ad60-7158d106977f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.032575529s
STEP: Saw pod success
Nov 13 00:54:05.875: INFO: Pod "pod-projected-configmaps-fb7cbca3-f07a-411c-ad60-7158d106977f" satisfied condition "Succeeded or Failed"
Nov 13 00:54:05.877: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-fb7cbca3-f07a-411c-ad60-7158d106977f container agnhost-container:
STEP: delete the pod
Nov 13 00:54:05.899: INFO: Waiting for pod pod-projected-configmaps-fb7cbca3-f07a-411c-ad60-7158d106977f to disappear
Nov 13 00:54:05.902: INFO: Pod pod-projected-configmaps-fb7cbca3-f07a-411c-ad60-7158d106977f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:54:05.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-785" for this suite.

• [SLOW TEST:14.133 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":11,"failed":0}
SSSSS
------------------------------
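The volume under test above is a projected ConfigMap with an explicit defaultMode, and the pod's only job is to read the mounted file back. A sketch of the relevant volume wiring; the 0400 mode, image, and names are illustrative:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	mode := int32(0400) // defaultMode applied to every projected file
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cfg",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "reader",
				Image:        "busybox",
				Command:      []string{"/bin/sh", "-c", "ls -l /etc/cfg"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cfg", MountPath: "/etc/cfg"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```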
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:54:05.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Nov 13 00:54:05.970: INFO: Waiting up to 5m0s for pod "pod-7404d76c-83ec-47cf-a379-4ccb9b64f742" in namespace "emptydir-2593" to be "Succeeded or Failed"
Nov 13 00:54:05.972: INFO: Pod "pod-7404d76c-83ec-47cf-a379-4ccb9b64f742": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32988ms
Nov 13 00:54:07.975: INFO: Pod "pod-7404d76c-83ec-47cf-a379-4ccb9b64f742": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005211261s
Nov 13 00:54:09.979: INFO: Pod "pod-7404d76c-83ec-47cf-a379-4ccb9b64f742": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009269477s
STEP: Saw pod success
Nov 13 00:54:09.979: INFO: Pod "pod-7404d76c-83ec-47cf-a379-4ccb9b64f742" satisfied condition "Succeeded or Failed"
Nov 13 00:54:09.981: INFO: Trying to get logs from node node1 pod pod-7404d76c-83ec-47cf-a379-4ccb9b64f742 container test-container:
STEP: delete the pod
Nov 13 00:54:10.003: INFO: Waiting for pod pod-7404d76c-83ec-47cf-a379-4ccb9b64f742 to disappear
Nov 13 00:54:10.004: INFO: Pod pod-7404d76c-83ec-47cf-a379-4ccb9b64f742 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:54:10.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2593" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:54:10.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Nov 13 00:54:10.134: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b0b7244-370d-40f8-8111-271c6f12a5e6" in namespace "projected-2602" to be "Succeeded or Failed"
Nov 13 00:54:10.139: INFO: Pod "downwardapi-volume-4b0b7244-370d-40f8-8111-271c6f12a5e6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.341241ms
Nov 13 00:54:12.144: INFO: Pod "downwardapi-volume-4b0b7244-370d-40f8-8111-271c6f12a5e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009810122s
Nov 13 00:54:14.150: INFO: Pod "downwardapi-volume-4b0b7244-370d-40f8-8111-271c6f12a5e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016326177s
STEP: Saw pod success
Nov 13 00:54:14.150: INFO: Pod "downwardapi-volume-4b0b7244-370d-40f8-8111-271c6f12a5e6" satisfied condition "Succeeded or Failed"
Nov 13 00:54:14.153: INFO: Trying to get logs from node node1 pod downwardapi-volume-4b0b7244-370d-40f8-8111-271c6f12a5e6 container client-container:
STEP: delete the pod
Nov 13 00:54:14.164: INFO: Waiting for pod downwardapi-volume-4b0b7244-370d-40f8-8111-271c6f12a5e6 to disappear
Nov 13 00:54:14.166: INFO: Pod downwardapi-volume-4b0b7244-370d-40f8-8111-271c6f12a5e6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:54:14.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2602" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":52,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
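The projected downward API volume in the test above exposes the container's own memory limit as a file the container then reads back. A sketch of that projection via resourceFieldRef; the limit, paths, and names are illustrative:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```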
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:53:56.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 13 00:53:57.346: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 13 00:53:59.359: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361637, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361637, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361637, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361637, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 13 00:54:01.365: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361637, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361637, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361637, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361637, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 13 00:54:03.364: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361637, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361637, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361637, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361637, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 13 00:54:05.365: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361637, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361637, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361637, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361637, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 13 00:54:08.370: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Nov 13 00:54:08.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8767-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:54:16.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8232" for this suite.
STEP: Destroying namespace "webhook-8232-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:19.561 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":3,"skipped":12,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
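The "Registering the mutating webhook" step above goes through the admissionregistration.k8s.io/v1 API, pointing at the service deployed earlier and scoped to the test CRD's group. A compressed sketch of such a registration; the CA bundle, serving path, group, and resource names here are illustrative placeholders, not the suite's generated values:

```go
package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	path := "/mutation"                                    // illustrative serving path
	caBundle := []byte("<PEM of CA that signed the cert>") // placeholder
	sideEffects := admissionregistrationv1.SideEffectClassNone
	cfg := &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-crd-mutator"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "demo.webhook.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "default", Name: "e2e-test-webhook", Path: &path,
				},
				CABundle: caBundle,
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{
					admissionregistrationv1.Create, admissionregistrationv1.Update,
				},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"webhook.example.com"},
					APIVersions: []string{"v1", "v2"},
					Resources:   []string{"demo-crds"},
				},
			}},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	if _, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
		Create(context.TODO(), cfg, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

The storage-version twist in the test is that the webhook keeps mutating correctly while the CRD's storage version is patched from v1 to v2 mid-test.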
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:53:51.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
W1113 00:53:51.845120 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 00:53:51.845: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 00:53:51.848: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Performing setup for networking test in namespace pod-network-test-7368
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 00:53:51.850: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 00:53:51.883: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 00:53:53.886: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 00:53:55.886: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 00:53:57.888: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 00:53:59.888: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 00:54:01.886: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 00:54:03.888: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 00:54:05.887: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 00:54:07.888: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 00:54:09.889: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 00:54:11.886: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 00:54:11.892: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 00:54:13.896: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 00:54:17.915: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
Nov 13 00:54:17.915: INFO: Breadth first check of 10.244.3.18 on host 10.10.190.207...
Nov 13 00:54:17.918: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.25:9080/dial?request=hostname&protocol=udp&host=10.244.3.18&port=8081&tries=1'] Namespace:pod-network-test-7368 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 00:54:17.918: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 00:54:18.099: INFO: Waiting for responses: map[]
Nov 13 00:54:18.100: INFO: reached 10.244.3.18 after 0/1 tries
Nov 13 00:54:18.100: INFO: Breadth first check of 10.244.4.188 on host 10.10.190.208...
Nov 13 00:54:18.102: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.25:9080/dial?request=hostname&protocol=udp&host=10.244.4.188&port=8081&tries=1'] Namespace:pod-network-test-7368 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 00:54:18.102: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 00:54:18.206: INFO: Waiting for responses: map[]
Nov 13 00:54:18.206: INFO: reached 10.244.4.188 after 0/1 tries
Nov 13 00:54:18.206: INFO: Going to retry 0 out of 2 pods....
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:54:18.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7368" for this suite.

• [SLOW TEST:26.392 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":36,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
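The ExecWithOptions entries above run curl inside the test pod against agnhost's /dial endpoint, which fans a UDP probe out to the target pod and reports back the hostnames that answered. A sketch of the same exec with client-go's remotecommand package; the pod, namespace, and probe URL are copied from the log and otherwise illustrative:

```go
package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Same probe the test issues: ask the helper pod to dial the target over UDP.
	cmd := "curl -g -q -s 'http://10.244.3.25:9080/dial?request=hostname&protocol=udp&host=10.244.3.18&port=8081&tries=1'"
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("pod-network-test-7368").
		Name("test-container-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "webserver",
			Command:   []string{"/bin/sh", "-c", cmd},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)
	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Println(stdout.String()) // JSON listing the hostnames that responded
}
```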
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:54:14.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's command
Nov 13 00:54:14.265: INFO: Waiting up to 5m0s for pod "var-expansion-5b3ca386-fc4d-4d6d-990e-5568b0d4fb38" in namespace "var-expansion-9828" to be "Succeeded or Failed"
Nov 13 00:54:14.269: INFO: Pod "var-expansion-5b3ca386-fc4d-4d6d-990e-5568b0d4fb38": Phase="Pending", Reason="", readiness=false. Elapsed: 4.662599ms
Nov 13 00:54:16.273: INFO: Pod "var-expansion-5b3ca386-fc4d-4d6d-990e-5568b0d4fb38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008036129s
Nov 13 00:54:18.277: INFO: Pod "var-expansion-5b3ca386-fc4d-4d6d-990e-5568b0d4fb38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012036289s
STEP: Saw pod success
Nov 13 00:54:18.277: INFO: Pod "var-expansion-5b3ca386-fc4d-4d6d-990e-5568b0d4fb38" satisfied condition "Succeeded or Failed"
Nov 13 00:54:18.279: INFO: Trying to get logs from node node2 pod var-expansion-5b3ca386-fc4d-4d6d-990e-5568b0d4fb38 container dapi-container:
STEP: delete the pod
Nov 13 00:54:18.292: INFO: Waiting for pod var-expansion-5b3ca386-fc4d-4d6d-990e-5568b0d4fb38 to disappear
Nov 13 00:54:18.294: INFO: Pod var-expansion-5b3ca386-fc4d-4d6d-990e-5568b0d4fb38 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:54:18.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9828" for this suite.
•S
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":78,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
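The substitution in this test is Kubernetes' own $(VAR) expansion, resolved by the kubelet from the container's environment before exec, not shell expansion. A minimal sketch, with an illustrative message and names:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox",
				Env:   []corev1.EnvVar{{Name: "MESSAGE", Value: "hello from the environment"}},
				// $(MESSAGE) is expanded by the kubelet, so no shell is required here.
				Command: []string{"/bin/echo", "$(MESSAGE)"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```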
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:54:18.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
STEP: listing events with field selection filtering on source
STEP: listing events with field selection filtering on reportingController
STEP: getting the test event
STEP: patching the test event
STEP: getting the test event
STEP: updating the test event
STEP: getting the test event
STEP: deleting the test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:54:18.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7639" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":5,"skipped":105,"failed":0}
SS
------------------------------
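The steps above walk the full events.k8s.io/v1 surface: create, list (including field selectors on source and reportingController), get, patch, update, and delete. A sketch of the create-and-filter part; all identifying fields are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	eventsv1 "k8s.io/api/events/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ev := &eventsv1.Event{
		ObjectMeta:          metav1.ObjectMeta{Name: "demo-event"},
		EventTime:           metav1.NewMicroTime(time.Now()),
		ReportingController: "demo-controller",
		ReportingInstance:   "demo-controller-1",
		Action:              "Testing",
		Reason:              "DemoReason",
		Type:                "Normal",
		Note:                "created for illustration",
		Regarding:           corev1.ObjectReference{Kind: "Pod", Namespace: "default", Name: "some-pod"},
	}
	ctx := context.TODO()
	if _, err := cs.EventsV1().Events("default").Create(ctx, ev, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Field-selector listing, as in the "filtering on reportingController" step.
	list, err := cs.EventsV1().Events("").List(ctx, metav1.ListOptions{
		FieldSelector: "reportingController=demo-controller",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("matched events:", len(list.Items))
}
```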
dns-test-service-2.dns-117.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-117.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-117.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-117.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-117.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-117.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-117.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-117.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-117.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-117.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-117.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-117.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 13 00:54:15.873: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-117.svc.cluster.local from pod dns-117/dns-test-9dfaf1b7-ea55-4456-88d8-328e055fcc3d: the server could not find the requested resource (get pods dns-test-9dfaf1b7-ea55-4456-88d8-328e055fcc3d) Nov 13 00:54:15.876: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-117.svc.cluster.local from pod dns-117/dns-test-9dfaf1b7-ea55-4456-88d8-328e055fcc3d: the server could not find the requested resource (get pods dns-test-9dfaf1b7-ea55-4456-88d8-328e055fcc3d) Nov 13 00:54:15.879: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-117.svc.cluster.local from pod dns-117/dns-test-9dfaf1b7-ea55-4456-88d8-328e055fcc3d: the server could not find the requested resource (get pods dns-test-9dfaf1b7-ea55-4456-88d8-328e055fcc3d) Nov 13 00:54:15.882: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-117.svc.cluster.local from pod dns-117/dns-test-9dfaf1b7-ea55-4456-88d8-328e055fcc3d: the server could not find the requested resource (get pods dns-test-9dfaf1b7-ea55-4456-88d8-328e055fcc3d) Nov 13 00:54:15.890: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-117.svc.cluster.local from pod dns-117/dns-test-9dfaf1b7-ea55-4456-88d8-328e055fcc3d: the server could not find the requested resource (get pods dns-test-9dfaf1b7-ea55-4456-88d8-328e055fcc3d) Nov 13 00:54:15.892: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-117.svc.cluster.local from pod dns-117/dns-test-9dfaf1b7-ea55-4456-88d8-328e055fcc3d: the server could not find the requested resource (get pods 
Nov 13 00:54:15.892: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-117.svc.cluster.local from pod dns-117/dns-test-9dfaf1b7-ea55-4456-88d8-328e055fcc3d: the server could not find the requested resource (get pods dns-test-9dfaf1b7-ea55-4456-88d8-328e055fcc3d)
Nov 13 00:54:15.895: INFO: Unable to read jessie_udp@dns-test-service-2.dns-117.svc.cluster.local from pod dns-117/dns-test-9dfaf1b7-ea55-4456-88d8-328e055fcc3d: the server could not find the requested resource (get pods dns-test-9dfaf1b7-ea55-4456-88d8-328e055fcc3d)
Nov 13 00:54:15.897: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-117.svc.cluster.local from pod dns-117/dns-test-9dfaf1b7-ea55-4456-88d8-328e055fcc3d: the server could not find the requested resource (get pods dns-test-9dfaf1b7-ea55-4456-88d8-328e055fcc3d)
Nov 13 00:54:15.903: INFO: Lookups using dns-117/dns-test-9dfaf1b7-ea55-4456-88d8-328e055fcc3d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-117.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-117.svc.cluster.local wheezy_udp@dns-test-service-2.dns-117.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-117.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-117.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-117.svc.cluster.local jessie_udp@dns-test-service-2.dns-117.svc.cluster.local jessie_tcp@dns-test-service-2.dns-117.svc.cluster.local]
Nov 13 00:54:20.935: INFO: DNS probes using dns-117/dns-test-9dfaf1b7-ea55-4456-88d8-328e055fcc3d succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:54:20.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-117" for this suite.

• [SLOW TEST:29.154 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":1,"skipped":19,"failed":0}
SSSSSSSSSSS
------------------------------
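Each dig probe above resolves a headless-service or pod subdomain record and writes an OK marker that the test later fetches; the early "Unable to read" entries just mean the records had not been programmed yet, and the probes converge a few seconds later. A sketch of the equivalent lookup with Go's resolver, assuming it runs inside a cluster pod where /etc/resolv.conf points at cluster DNS; the names are taken from the log:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Same records the wheezy/jessie probes query with dig; these resolve
	// only from inside the cluster.
	names := []string{
		"dns-querier-2.dns-test-service-2.dns-117.svc.cluster.local",
		"dns-test-service-2.dns-117.svc.cluster.local",
	}
	for _, name := range names {
		addrs, err := net.LookupHost(name)
		if err != nil {
			fmt.Printf("%s: not ready yet: %v\n", name, err)
			continue
		}
		fmt.Printf("%s -> %v\n", name, addrs)
	}
}
```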
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:53:51.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W1113 00:53:51.805637 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 00:53:51.805: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 00:53:51.807: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service in namespace services-1918
STEP: creating service affinity-clusterip-transition in namespace services-1918
STEP: creating replication controller affinity-clusterip-transition in namespace services-1918
I1113 00:53:51.822496 27 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-1918, replica count: 3
I1113 00:53:54.874410 27 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1113 00:53:57.875317 27 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1113 00:54:00.878128 27 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1113 00:54:03.879771 27 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Nov 13 00:54:03.884: INFO: Creating new exec pod
Nov 13 00:54:08.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1918 exec execpod-affinity8pk97 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
Nov 13 00:54:09.139: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n"
Nov 13 00:54:09.139: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Nov 13 00:54:09.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1918 exec execpod-affinity8pk97 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.36.235 80'
Nov 13 00:54:09.764: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.36.235 80\nConnection to 10.233.36.235 80 port [tcp/http] succeeded!\n"
Nov 13 00:54:09.765: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Nov 13 00:54:09.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1918 exec execpod-affinity8pk97 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.36.235:80/ ; done'
Nov 13 00:54:10.407: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n"
"\naffinity-clusterip-transition-6x8fg\naffinity-clusterip-transition-6x8fg\naffinity-clusterip-transition-ztbjb\naffinity-clusterip-transition-6x8fg\naffinity-clusterip-transition-ztbjb\naffinity-clusterip-transition-6x8fg\naffinity-clusterip-transition-6x8fg\naffinity-clusterip-transition-6x8fg\naffinity-clusterip-transition-ddshx\naffinity-clusterip-transition-ddshx\naffinity-clusterip-transition-ztbjb\naffinity-clusterip-transition-ztbjb\naffinity-clusterip-transition-6x8fg\naffinity-clusterip-transition-6x8fg\naffinity-clusterip-transition-6x8fg\naffinity-clusterip-transition-6x8fg" Nov 13 00:54:10.407: INFO: Received response from host: affinity-clusterip-transition-6x8fg Nov 13 00:54:10.407: INFO: Received response from host: affinity-clusterip-transition-6x8fg Nov 13 00:54:10.407: INFO: Received response from host: affinity-clusterip-transition-ztbjb Nov 13 00:54:10.407: INFO: Received response from host: affinity-clusterip-transition-6x8fg Nov 13 00:54:10.407: INFO: Received response from host: affinity-clusterip-transition-ztbjb Nov 13 00:54:10.407: INFO: Received response from host: affinity-clusterip-transition-6x8fg Nov 13 00:54:10.407: INFO: Received response from host: affinity-clusterip-transition-6x8fg Nov 13 00:54:10.407: INFO: Received response from host: affinity-clusterip-transition-6x8fg Nov 13 00:54:10.407: INFO: Received response from host: affinity-clusterip-transition-ddshx Nov 13 00:54:10.407: INFO: Received response from host: affinity-clusterip-transition-ddshx Nov 13 00:54:10.407: INFO: Received response from host: affinity-clusterip-transition-ztbjb Nov 13 00:54:10.407: INFO: Received response from host: affinity-clusterip-transition-ztbjb Nov 13 00:54:10.407: INFO: Received response from host: affinity-clusterip-transition-6x8fg Nov 13 00:54:10.407: INFO: Received response from host: affinity-clusterip-transition-6x8fg Nov 13 00:54:10.407: INFO: Received response from host: affinity-clusterip-transition-6x8fg Nov 13 00:54:10.407: INFO: Received response from host: affinity-clusterip-transition-6x8fg Nov 13 00:54:10.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1918 exec execpod-affinity8pk97 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.36.235:80/ ; done' Nov 13 00:54:10.921: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.36.235:80/\n" Nov 13 00:54:10.921: INFO: stdout: 
"\naffinity-clusterip-transition-ztbjb\naffinity-clusterip-transition-ztbjb\naffinity-clusterip-transition-ztbjb\naffinity-clusterip-transition-ztbjb\naffinity-clusterip-transition-ztbjb\naffinity-clusterip-transition-ztbjb\naffinity-clusterip-transition-ztbjb\naffinity-clusterip-transition-ztbjb\naffinity-clusterip-transition-ztbjb\naffinity-clusterip-transition-ztbjb\naffinity-clusterip-transition-ztbjb\naffinity-clusterip-transition-ztbjb\naffinity-clusterip-transition-ztbjb\naffinity-clusterip-transition-ztbjb\naffinity-clusterip-transition-ztbjb\naffinity-clusterip-transition-ztbjb" Nov 13 00:54:10.921: INFO: Received response from host: affinity-clusterip-transition-ztbjb Nov 13 00:54:10.921: INFO: Received response from host: affinity-clusterip-transition-ztbjb Nov 13 00:54:10.921: INFO: Received response from host: affinity-clusterip-transition-ztbjb Nov 13 00:54:10.921: INFO: Received response from host: affinity-clusterip-transition-ztbjb Nov 13 00:54:10.921: INFO: Received response from host: affinity-clusterip-transition-ztbjb Nov 13 00:54:10.921: INFO: Received response from host: affinity-clusterip-transition-ztbjb Nov 13 00:54:10.921: INFO: Received response from host: affinity-clusterip-transition-ztbjb Nov 13 00:54:10.921: INFO: Received response from host: affinity-clusterip-transition-ztbjb Nov 13 00:54:10.921: INFO: Received response from host: affinity-clusterip-transition-ztbjb Nov 13 00:54:10.921: INFO: Received response from host: affinity-clusterip-transition-ztbjb Nov 13 00:54:10.921: INFO: Received response from host: affinity-clusterip-transition-ztbjb Nov 13 00:54:10.921: INFO: Received response from host: affinity-clusterip-transition-ztbjb Nov 13 00:54:10.921: INFO: Received response from host: affinity-clusterip-transition-ztbjb Nov 13 00:54:10.921: INFO: Received response from host: affinity-clusterip-transition-ztbjb Nov 13 00:54:10.921: INFO: Received response from host: affinity-clusterip-transition-ztbjb Nov 13 00:54:10.921: INFO: Received response from host: affinity-clusterip-transition-ztbjb Nov 13 00:54:10.921: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-1918, will wait for the garbage collector to delete the pods Nov 13 00:54:10.986: INFO: Deleting ReplicationController affinity-clusterip-transition took: 3.275052ms Nov 13 00:54:11.086: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.243728ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:54:21.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1918" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:29.936 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:18.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-c9de9eec-2641-4d18-9608-edb46de0fdb2 STEP: Creating a pod to test consume secrets Nov 13 00:54:18.462: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-44e3e2c8-6460-4614-9c2f-31228bc09433" in namespace "projected-62" to be "Succeeded or Failed" Nov 13 00:54:18.464: INFO: Pod "pod-projected-secrets-44e3e2c8-6460-4614-9c2f-31228bc09433": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290527ms Nov 13 00:54:20.467: INFO: Pod "pod-projected-secrets-44e3e2c8-6460-4614-9c2f-31228bc09433": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005454109s Nov 13 00:54:22.473: INFO: Pod "pod-projected-secrets-44e3e2c8-6460-4614-9c2f-31228bc09433": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01063389s Nov 13 00:54:24.481: INFO: Pod "pod-projected-secrets-44e3e2c8-6460-4614-9c2f-31228bc09433": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018525285s STEP: Saw pod success Nov 13 00:54:24.481: INFO: Pod "pod-projected-secrets-44e3e2c8-6460-4614-9c2f-31228bc09433" satisfied condition "Succeeded or Failed" Nov 13 00:54:24.483: INFO: Trying to get logs from node node2 pod pod-projected-secrets-44e3e2c8-6460-4614-9c2f-31228bc09433 container projected-secret-volume-test: STEP: delete the pod Nov 13 00:54:24.613: INFO: Waiting for pod pod-projected-secrets-44e3e2c8-6460-4614-9c2f-31228bc09433 to disappear Nov 13 00:54:24.615: INFO: Pod pod-projected-secrets-44e3e2c8-6460-4614-9c2f-31228bc09433 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:54:24.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-62" for this suite. 
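The projection consumed above maps a secret key to a custom file name with an explicit per-item mode. Hand-written, the shape is roughly this (names, key, and mode are illustrative, not the suite's generated spec):

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox:1.28
        command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/renamed-key"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/projected
          readOnly: true
      volumes:
      - name: secret-volume
        projected:
          sources:
          - secret:
              name: demo-secret
              items:
              - key: data-1
                path: renamed-key
                mode: 0400
    EOF

The pod should run to Succeeded after printing the remapped key, mirroring the "Saw pod success" flow above.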
• [SLOW TEST:6.201 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":133,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:53:51.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W1113 00:53:51.829775 33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 00:53:51.830: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 00:53:51.831: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-86ab5f7a-2524-450e-964f-97f34d4a140e in namespace container-probe-6569 Nov 13 00:54:05.862: INFO: Started pod liveness-86ab5f7a-2524-450e-964f-97f34d4a140e in namespace container-probe-6569 STEP: checking the pod's current state and verifying that restartCount is present Nov 13 00:54:05.864: INFO: Initial restart count of pod liveness-86ab5f7a-2524-450e-964f-97f34d4a140e is 0 Nov 13 00:54:25.912: INFO: Restart count of pod container-probe-6569/liveness-86ab5f7a-2524-450e-964f-97f34d4a140e is now 1 (20.048510777s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:54:25.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6569" for this suite. 
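The restart observed above (restartCount going from 0 to 1 after roughly 20s) is the standard httpGet liveness pattern: agnhost's liveness server answers /healthz successfully for a short window and then starts failing, so the kubelet kills and recreates the container. A hedged, self-contained sketch:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-demo
    spec:
      containers:
      - name: liveness
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: ["liveness"]
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 15
          failureThreshold: 1
    EOF
    # Watch the RESTARTS column tick upward
    kubectl get pod liveness-demo -w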
• [SLOW TEST:34.126 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":18,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:26.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should support creating EndpointSlice API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/discovery.k8s.io STEP: getting /apis/discovery.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Nov 13 00:54:26.059: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Nov 13 00:54:26.063: INFO: starting watch STEP: patching STEP: updating Nov 13 00:54:26.073: INFO: waiting for watch events with expected annotations Nov 13 00:54:26.073: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:54:26.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-3471" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":2,"skipped":55,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:26.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:54:28.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-5052" for this suite. 
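Both EndpointSlice specs above target the discovery.k8s.io/v1 group; the API-operations sequence (create, get, list, watch, patch, delete, deleteCollection) can be replayed by hand with an illustrative slice such as:

    kubectl api-resources --api-group=discovery.k8s.io
    kubectl apply -f - <<'EOF'
    apiVersion: discovery.k8s.io/v1
    kind: EndpointSlice
    metadata:
      name: demo-slice
      labels:
        kubernetes.io/service-name: demo-service   # hypothetical owning Service
    addressType: IPv4
    ports:
    - name: http
      port: 80
      protocol: TCP
    endpoints:
    - addresses: ["10.0.0.10"]
    EOF
    kubectl get endpointslices demo-slice -o yaml
    kubectl delete endpointslice demo-slice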
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":3,"skipped":81,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:21.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 00:54:21.863: INFO: The status of Pod busybox-scheduling-8bc432eb-6593-4158-ba34-15daf6ed4c89 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:23.867: INFO: The status of Pod busybox-scheduling-8bc432eb-6593-4158-ba34-15daf6ed4c89 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:25.867: INFO: The status of Pod busybox-scheduling-8bc432eb-6593-4158-ba34-15daf6ed4c89 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:27.868: INFO: The status of Pod busybox-scheduling-8bc432eb-6593-4158-ba34-15daf6ed4c89 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:54:28.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2364" for this suite. • [SLOW TEST:6.561 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox command in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:41 should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:20.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-6330e8a3-f50d-44bb-a446-863b8da944cd STEP: Creating a pod to test consume configMaps Nov 13 00:54:21.019: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ddbf2268-a2e7-4693-89b5-1b467602d985" in namespace "projected-747" to be "Succeeded or Failed" Nov 13 00:54:21.022: INFO: Pod "pod-projected-configmaps-ddbf2268-a2e7-4693-89b5-1b467602d985": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.600286ms Nov 13 00:54:23.026: INFO: Pod "pod-projected-configmaps-ddbf2268-a2e7-4693-89b5-1b467602d985": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006616535s Nov 13 00:54:25.029: INFO: Pod "pod-projected-configmaps-ddbf2268-a2e7-4693-89b5-1b467602d985": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009707983s Nov 13 00:54:27.034: INFO: Pod "pod-projected-configmaps-ddbf2268-a2e7-4693-89b5-1b467602d985": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01501239s STEP: Saw pod success Nov 13 00:54:27.034: INFO: Pod "pod-projected-configmaps-ddbf2268-a2e7-4693-89b5-1b467602d985" satisfied condition "Succeeded or Failed" Nov 13 00:54:27.037: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-ddbf2268-a2e7-4693-89b5-1b467602d985 container agnhost-container: STEP: delete the pod Nov 13 00:54:28.363: INFO: Waiting for pod pod-projected-configmaps-ddbf2268-a2e7-4693-89b5-1b467602d985 to disappear Nov 13 00:54:28.365: INFO: Pod pod-projected-configmaps-ddbf2268-a2e7-4693-89b5-1b467602d985 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:54:28.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-747" for this suite. • [SLOW TEST:7.390 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":30,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:53:51.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl W1113 00:53:51.769465 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 00:53:51.769: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 00:53:51.778: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Nov 13 00:53:51.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 create -f -' Nov 13 00:53:52.246: INFO: stderr: "" Nov 13 00:53:52.246: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: 
waiting for all containers in name=update-demo pods to come up. Nov 13 00:53:52.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 13 00:53:52.402: INFO: stderr: "" Nov 13 00:53:52.402: INFO: stdout: "update-demo-nautilus-624px update-demo-nautilus-zvbf2 " Nov 13 00:53:52.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods update-demo-nautilus-624px -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 13 00:53:52.570: INFO: stderr: "" Nov 13 00:53:52.570: INFO: stdout: "" Nov 13 00:53:52.570: INFO: update-demo-nautilus-624px is created but not running Nov 13 00:53:57.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 13 00:53:57.746: INFO: stderr: "" Nov 13 00:53:57.746: INFO: stdout: "update-demo-nautilus-624px update-demo-nautilus-zvbf2 " Nov 13 00:53:57.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods update-demo-nautilus-624px -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 13 00:53:57.930: INFO: stderr: "" Nov 13 00:53:57.930: INFO: stdout: "" Nov 13 00:53:57.930: INFO: update-demo-nautilus-624px is created but not running Nov 13 00:54:02.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 13 00:54:03.105: INFO: stderr: "" Nov 13 00:54:03.105: INFO: stdout: "update-demo-nautilus-624px update-demo-nautilus-zvbf2 " Nov 13 00:54:03.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods update-demo-nautilus-624px -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 13 00:54:03.298: INFO: stderr: "" Nov 13 00:54:03.298: INFO: stdout: "" Nov 13 00:54:03.298: INFO: update-demo-nautilus-624px is created but not running Nov 13 00:54:08.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 13 00:54:08.474: INFO: stderr: "" Nov 13 00:54:08.474: INFO: stdout: "update-demo-nautilus-624px update-demo-nautilus-zvbf2 " Nov 13 00:54:08.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods update-demo-nautilus-624px -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Nov 13 00:54:08.631: INFO: stderr: "" Nov 13 00:54:08.631: INFO: stdout: "true" Nov 13 00:54:08.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods update-demo-nautilus-624px -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Nov 13 00:54:08.810: INFO: stderr: "" Nov 13 00:54:08.810: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Nov 13 00:54:08.810: INFO: validating pod update-demo-nautilus-624px Nov 13 00:54:08.814: INFO: got data: { "image": "nautilus.jpg" } Nov 13 00:54:08.814: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 13 00:54:08.814: INFO: update-demo-nautilus-624px is verified up and running Nov 13 00:54:08.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods update-demo-nautilus-zvbf2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 13 00:54:08.969: INFO: stderr: "" Nov 13 00:54:08.969: INFO: stdout: "true" Nov 13 00:54:08.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods update-demo-nautilus-zvbf2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Nov 13 00:54:09.122: INFO: stderr: "" Nov 13 00:54:09.122: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Nov 13 00:54:09.122: INFO: validating pod update-demo-nautilus-zvbf2 Nov 13 00:54:09.125: INFO: got data: { "image": "nautilus.jpg" } Nov 13 00:54:09.126: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 13 00:54:09.126: INFO: update-demo-nautilus-zvbf2 is verified up and running STEP: scaling down the replication controller Nov 13 00:54:09.134: INFO: scanned /root for discovery docs: Nov 13 00:54:09.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Nov 13 00:54:09.361: INFO: stderr: "" Nov 13 00:54:09.361: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Nov 13 00:54:09.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 13 00:54:09.532: INFO: stderr: "" Nov 13 00:54:09.533: INFO: stdout: "update-demo-nautilus-624px update-demo-nautilus-zvbf2 " STEP: Replicas for name=update-demo: expected=1 actual=2 Nov 13 00:54:14.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 13 00:54:14.722: INFO: stderr: "" Nov 13 00:54:14.722: INFO: stdout: "update-demo-nautilus-624px update-demo-nautilus-zvbf2 " STEP: Replicas for name=update-demo: expected=1 actual=2 Nov 13 00:54:19.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 13 00:54:19.913: INFO: stderr: "" Nov 13 00:54:19.913: INFO: stdout: "update-demo-nautilus-624px update-demo-nautilus-zvbf2 " STEP: Replicas for name=update-demo: expected=1 actual=2 Nov 13 00:54:24.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 13 00:54:25.103: INFO: stderr: "" Nov 13 00:54:25.103: INFO: stdout: "update-demo-nautilus-zvbf2 " Nov 13 00:54:25.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods update-demo-nautilus-zvbf2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 13 00:54:25.282: INFO: stderr: "" Nov 13 00:54:25.282: INFO: stdout: "true" Nov 13 00:54:25.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods update-demo-nautilus-zvbf2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Nov 13 00:54:25.444: INFO: stderr: "" Nov 13 00:54:25.444: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Nov 13 00:54:25.444: INFO: validating pod update-demo-nautilus-zvbf2 Nov 13 00:54:25.447: INFO: got data: { "image": "nautilus.jpg" } Nov 13 00:54:25.447: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 13 00:54:25.447: INFO: update-demo-nautilus-zvbf2 is verified up and running STEP: scaling up the replication controller Nov 13 00:54:25.455: INFO: scanned /root for discovery docs: Nov 13 00:54:25.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Nov 13 00:54:25.683: INFO: stderr: "" Nov 13 00:54:25.683: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Nov 13 00:54:25.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 13 00:54:25.868: INFO: stderr: "" Nov 13 00:54:25.868: INFO: stdout: "update-demo-nautilus-kpmd2 update-demo-nautilus-zvbf2 " Nov 13 00:54:25.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods update-demo-nautilus-kpmd2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 13 00:54:26.041: INFO: stderr: "" Nov 13 00:54:26.041: INFO: stdout: "" Nov 13 00:54:26.041: INFO: update-demo-nautilus-kpmd2 is created but not running Nov 13 00:54:31.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 13 00:54:31.226: INFO: stderr: "" Nov 13 00:54:31.226: INFO: stdout: "update-demo-nautilus-kpmd2 update-demo-nautilus-zvbf2 " Nov 13 00:54:31.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods update-demo-nautilus-kpmd2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 13 00:54:31.397: INFO: stderr: "" Nov 13 00:54:31.397: INFO: stdout: "true" Nov 13 00:54:31.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods update-demo-nautilus-kpmd2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Nov 13 00:54:31.580: INFO: stderr: "" Nov 13 00:54:31.580: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Nov 13 00:54:31.580: INFO: validating pod update-demo-nautilus-kpmd2 Nov 13 00:54:31.584: INFO: got data: { "image": "nautilus.jpg" } Nov 13 00:54:31.584: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 13 00:54:31.584: INFO: update-demo-nautilus-kpmd2 is verified up and running Nov 13 00:54:31.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods update-demo-nautilus-zvbf2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 13 00:54:31.759: INFO: stderr: "" Nov 13 00:54:31.759: INFO: stdout: "true" Nov 13 00:54:31.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods update-demo-nautilus-zvbf2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Nov 13 00:54:31.929: INFO: stderr: "" Nov 13 00:54:31.930: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Nov 13 00:54:31.930: INFO: validating pod update-demo-nautilus-zvbf2 Nov 13 00:54:31.933: INFO: got data: { "image": "nautilus.jpg" } Nov 13 00:54:31.933: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Nov 13 00:54:31.933: INFO: update-demo-nautilus-zvbf2 is verified up and running STEP: using delete to clean up resources Nov 13 00:54:31.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 delete --grace-period=0 --force -f -' Nov 13 00:54:32.068: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 13 00:54:32.068: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Nov 13 00:54:32.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get rc,svc -l name=update-demo --no-headers' Nov 13 00:54:32.268: INFO: stderr: "No resources found in kubectl-5425 namespace.\n" Nov 13 00:54:32.268: INFO: stdout: "" Nov 13 00:54:32.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5425 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 13 00:54:32.445: INFO: stderr: "" Nov 13 00:54:32.445: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:54:32.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5425" for this suite. • [SLOW TEST:40.703 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:53:58.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Nov 13 00:53:58.933: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:00.937: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:02.938: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:04.937: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:06.936: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:08.936: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Nov 13 00:54:08.952: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:10.955: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:12.958: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:14.956: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:16.956: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Nov 13 00:54:17.032: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 13 00:54:17.035: INFO: Pod pod-with-poststart-http-hook still exists Nov 13 00:54:19.038: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 13 00:54:19.041: INFO: Pod pod-with-poststart-http-hook still exists Nov 13 00:54:21.036: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 13 00:54:21.038: INFO: Pod pod-with-poststart-http-hook still exists Nov 13 00:54:23.037: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 13 00:54:23.040: INFO: Pod pod-with-poststart-http-hook still exists Nov 13 00:54:25.036: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 13 00:54:25.039: INFO: Pod pod-with-poststart-http-hook still exists Nov 13 00:54:27.035: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 13 00:54:27.038: INFO: Pod pod-with-poststart-http-hook still exists Nov 13 00:54:29.038: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 13 00:54:29.041: INFO: Pod pod-with-poststart-http-hook still exists Nov 13 00:54:31.038: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 13 00:54:31.041: INFO: Pod pod-with-poststart-http-hook still exists Nov 13 00:54:33.038: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 13 00:54:33.041: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:54:33.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6056" for this suite. 
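The postStart hook above is delivered by the kubelet as an httpGet the moment the container starts; the suite points it at the handler pod created in BeforeEach. A minimal hand-written version (the host address is a stand-in for wherever a handler actually listens):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: poststart-demo
    spec:
      containers:
      - name: main
        image: busybox:1.28
        command: ["sh", "-c", "sleep 600"]
        lifecycle:
          postStart:
            httpGet:
              path: /echo?msg=poststart
              port: 8080
              host: 10.0.0.10   # hypothetical handler address
    EOF

If the hook request fails, the kubelet kills the container (subject to its restart policy), so the hook doubles as a startup-side assertion.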
• [SLOW TEST:34.156 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:32.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 00:54:32.493: INFO: Creating pod... Nov 13 00:54:32.505: INFO: Pod Quantity: 1 Status: Pending Nov 13 00:54:33.509: INFO: Pod Quantity: 1 Status: Pending Nov 13 00:54:34.509: INFO: Pod Quantity: 1 Status: Pending Nov 13 00:54:35.509: INFO: Pod Quantity: 1 Status: Pending Nov 13 00:54:36.510: INFO: Pod Status: Running Nov 13 00:54:36.510: INFO: Creating service... 
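Each request logged below goes through the apiserver's proxy subresource, i.e. /api/v1/namespaces/<ns>/pods/<pod>/proxy/<path> and the equivalent services/<svc>/proxy/<path>. The GET flavor is easy to reproduce from a workstation:

    # Raw GET through the pod proxy subresource (names from this spec; any HTTP-serving pod works)
    kubectl get --raw "/api/v1/namespaces/proxy-5702/pods/agnhost/proxy/some/path/with/GET"
    # Same path shape, routed through the Service instead
    kubectl get --raw "/api/v1/namespaces/proxy-5702/services/test-service/proxy/some/path/with/GET"

kubectl get --raw only issues GETs; the other verbs the spec covers (DELETE, HEAD, OPTIONS, PATCH, POST, PUT) need a direct HTTP client holding cluster credentials, which is what the http.Client lines below are doing.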
Nov 13 00:54:36.516: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5702/pods/agnhost/proxy/some/path/with/DELETE Nov 13 00:54:36.603: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Nov 13 00:54:36.603: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5702/pods/agnhost/proxy/some/path/with/GET Nov 13 00:54:36.605: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Nov 13 00:54:36.605: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5702/pods/agnhost/proxy/some/path/with/HEAD Nov 13 00:54:36.608: INFO: http.Client request:HEAD | StatusCode:200 Nov 13 00:54:36.608: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5702/pods/agnhost/proxy/some/path/with/OPTIONS Nov 13 00:54:36.610: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Nov 13 00:54:36.610: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5702/pods/agnhost/proxy/some/path/with/PATCH Nov 13 00:54:36.613: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Nov 13 00:54:36.613: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5702/pods/agnhost/proxy/some/path/with/POST Nov 13 00:54:36.615: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Nov 13 00:54:36.615: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5702/pods/agnhost/proxy/some/path/with/PUT Nov 13 00:54:36.618: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT Nov 13 00:54:36.618: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5702/services/test-service/proxy/some/path/with/DELETE Nov 13 00:54:36.621: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Nov 13 00:54:36.621: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5702/services/test-service/proxy/some/path/with/GET Nov 13 00:54:36.625: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Nov 13 00:54:36.625: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5702/services/test-service/proxy/some/path/with/HEAD Nov 13 00:54:36.628: INFO: http.Client request:HEAD | StatusCode:200 Nov 13 00:54:36.628: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5702/services/test-service/proxy/some/path/with/OPTIONS Nov 13 00:54:36.634: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Nov 13 00:54:36.634: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5702/services/test-service/proxy/some/path/with/PATCH Nov 13 00:54:36.637: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Nov 13 00:54:36.637: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5702/services/test-service/proxy/some/path/with/POST Nov 13 00:54:36.640: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Nov 13 00:54:36.640: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-5702/services/test-service/proxy/some/path/with/PUT Nov 13 00:54:36.642: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT [AfterEach] version v1 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:54:36.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5702" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0} S ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:33.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test service account token: Nov 13 00:54:33.193: INFO: Waiting up to 5m0s for pod "test-pod-3046e442-6ee1-4956-a3bc-28dde00f8086" in namespace "svcaccounts-5792" to be "Succeeded or Failed" Nov 13 00:54:33.195: INFO: Pod "test-pod-3046e442-6ee1-4956-a3bc-28dde00f8086": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02941ms Nov 13 00:54:35.198: INFO: Pod "test-pod-3046e442-6ee1-4956-a3bc-28dde00f8086": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00564584s Nov 13 00:54:37.203: INFO: Pod "test-pod-3046e442-6ee1-4956-a3bc-28dde00f8086": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009996118s STEP: Saw pod success Nov 13 00:54:37.203: INFO: Pod "test-pod-3046e442-6ee1-4956-a3bc-28dde00f8086" satisfied condition "Succeeded or Failed" Nov 13 00:54:37.206: INFO: Trying to get logs from node node2 pod test-pod-3046e442-6ee1-4956-a3bc-28dde00f8086 container agnhost-container: STEP: delete the pod Nov 13 00:54:37.220: INFO: Waiting for pod test-pod-3046e442-6ee1-4956-a3bc-28dde00f8086 to disappear Nov 13 00:54:37.222: INFO: Pod test-pod-3046e442-6ee1-4956-a3bc-28dde00f8086 no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:54:37.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5792" for this suite. 
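The projected service-account token mounted above is the serviceAccountToken projected-volume source; hand-rolled it looks like this (path and expiry illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: sa-token-demo
    spec:
      restartPolicy: Never
      serviceAccountName: default
      containers:
      - name: agnhost-container
        image: busybox:1.28
        command: ["sh", "-c", "wc -c /var/run/secrets/tokens/sa-token"]
        volumeMounts:
        - name: token
          mountPath: /var/run/secrets/tokens
      volumes:
      - name: token
        projected:
          sources:
          - serviceAccountToken:
              path: sa-token
              expirationSeconds: 3600
    EOF

The kubelet writes a short-lived token at the given path and keeps it refreshed; the pod exits Succeeded once it can read the file, matching the flow above.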
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":3,"skipped":60,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:53:51.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion W1113 00:53:51.760058 22 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 00:53:51.760: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 00:53:51.763: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Nov 13 00:54:03.794: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-1102 PodName:var-expansion-2cd4f28e-4801-4350-bae0-5caab6ecb511 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 00:54:03.794: INFO: >>> kubeConfig: /root/.kube/config STEP: test for file in mounted path Nov 13 00:54:04.132: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-1102 PodName:var-expansion-2cd4f28e-4801-4350-bae0-5caab6ecb511 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 00:54:04.132: INFO: >>> kubeConfig: /root/.kube/config STEP: updating the annotation value Nov 13 00:54:04.722: INFO: Successfully updated pod "var-expansion-2cd4f28e-4801-4350-bae0-5caab6ecb511" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Nov 13 00:54:04.724: INFO: Deleting pod "var-expansion-2cd4f28e-4801-4350-bae0-5caab6ecb511" in namespace "var-expansion-1102" Nov 13 00:54:04.729: INFO: Wait up to 5m0s for pod "var-expansion-2cd4f28e-4801-4350-bae0-5caab6ecb511" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:54:38.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1102" for this suite. 
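The subpath-writing machinery above combines a downward-API env var (fed from a pod annotation, which is why the spec updates the annotation mid-test) with subPathExpr on the volume mount. A simplified sketch of the shape involved (annotation and names are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-demo
      annotations:
        mysubpath: mypath
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox:1.28
        command: ["sh", "-c", "touch /volume_mount/test.log && ls -l /volume_mount"]
        env:
        - name: POD_SUBPATH
          valueFrom:
            fieldRef:
              fieldPath: metadata.annotations['mysubpath']
        volumeMounts:
        - name: workdir
          mountPath: /volume_mount
          subPathExpr: $(POD_SUBPATH)
      volumes:
      - name: workdir
        emptyDir: {}
    EOF

The container sees /volume_mount, but on the node the writes land under the expanded subdirectory of the emptyDir volume.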
• [SLOW TEST:47.019 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:24.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:54:41.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3864" for this suite. • [SLOW TEST:17.060 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":3,"skipped":144,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:18.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Nov 13 00:54:18.489: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:20.492: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:22.491: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:24.497: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Nov 13 00:54:24.514: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:26.517: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:28.518: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:30.518: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Nov 13 00:54:30.525: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 13 00:54:30.528: INFO: Pod pod-with-prestop-http-hook still exists Nov 13 00:54:32.529: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 13 00:54:32.532: INFO: Pod pod-with-prestop-http-hook still exists Nov 13 00:54:34.530: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 13 00:54:34.532: INFO: Pod pod-with-prestop-http-hook still exists Nov 13 00:54:36.528: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 13 00:54:36.531: INFO: Pod pod-with-prestop-http-hook still exists Nov 13 00:54:38.530: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 13 00:54:38.533: INFO: Pod pod-with-prestop-http-hook still exists Nov 13 00:54:40.528: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 13 00:54:40.532: INFO: Pod pod-with-prestop-http-hook still exists Nov 13 00:54:42.529: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 13 00:54:42.531: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:54:42.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1843" for this suite. 
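The preStop variant just finished uses the same hook shape as the postStart sketch earlier, but it fires during termination, which is why the spec deletes the pod and then polls for it to disappear while the hook runs inside the grace period. Only the lifecycle stanza changes (host again hypothetical):

        lifecycle:
          preStop:
            httpGet:
              path: /echo?msg=prestop
              port: 8080
              host: 10.0.0.10   # hypothetical handler address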
• [SLOW TEST:24.093 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":107,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:42.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:54:42.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-525" for this suite. 
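The 406 above comes from asking the apiserver to render a resource as a meta.k8s.io Table when the backend cannot; the negotiation is only an Accept header. An illustrative probe (token acquisition varies by cluster; kubectl create token is a 1.24+ convenience, older clusters need a service-account secret):

    APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
    TOKEN=$(kubectl create token default)    # or any bearer token the cluster honors
    curl -sk -H "Authorization: Bearer $TOKEN" \
      -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io' \
      "$APISERVER/api/v1/namespaces/default/pods"

A handler that does not implement the Table conversion answers 406 Not Acceptable, which is the contract this spec pins down.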
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":7,"skipped":115,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:36.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-e4c8ba5c-0c2a-40ad-91e6-43d5a5956d9a STEP: Creating the pod Nov 13 00:54:36.702: INFO: The status of Pod pod-configmaps-8fd52ddd-625e-4e40-a5f9-e9d23a4fdaa0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:38.706: INFO: The status of Pod pod-configmaps-8fd52ddd-625e-4e40-a5f9-e9d23a4fdaa0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:40.705: INFO: The status of Pod pod-configmaps-8fd52ddd-625e-4e40-a5f9-e9d23a4fdaa0 is Running (Ready = true) STEP: Updating configmap configmap-test-upd-e4c8ba5c-0c2a-40ad-91e6-43d5a5956d9a STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:54:42.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4886" for this suite. • [SLOW TEST:6.084 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":12,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:28.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:54:44.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6621" for this suite. • [SLOW TEST:16.111 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":4,"skipped":103,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:44.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Nov 13 00:54:44.550: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9499 b28af51d-1836-4b84-b270-c9b6a009f924 60240 0 2021-11-13 00:54:44 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-11-13 00:54:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Nov 13 00:54:44.550: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9499 b28af51d-1836-4b84-b270-c9b6a009f924 60241 0 2021-11-13 00:54:44 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-11-13 00:54:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: 
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:54:44.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Nov 13 00:54:44.550: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9499 b28af51d-1836-4b84-b270-c9b6a009f924 60240 0 2021-11-13 00:54:44 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-11-13 00:54:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Nov 13 00:54:44.550: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9499 b28af51d-1836-4b84-b270-c9b6a009f924 60241 0 2021-11-13 00:54:44 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-11-13 00:54:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Nov 13 00:54:44.571: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9499 b28af51d-1836-4b84-b270-c9b6a009f924 60242 0 2021-11-13 00:54:44 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-11-13 00:54:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Nov 13 00:54:44.571: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9499 b28af51d-1836-4b84-b270-c9b6a009f924 60243 0 2021-11-13 00:54:44 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-11-13 00:54:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:54:44.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9499" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":5,"skipped":173,"failed":0}
SSSSSS
------------------------------
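Restarting a watch from where the previous one stopped is just a matter of passing the last observed resourceVersion in the new watch's list options; events that happened while no watch was open (the second MODIFIED and the DELETED above) are then replayed. A hedged client-go sketch (namespace and resource version are placeholders, the label selector matches the one in the log):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // resumeWatch re-opens a configmap watch at lastRV, so MODIFIED/DELETED
    // events that occurred while the first watch was closed are still delivered.
    func resumeWatch(ctx context.Context, cs kubernetes.Interface, ns, lastRV string) error {
        w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{
            LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
            ResourceVersion: lastRV, // replay starts just after this version
        })
        if err != nil {
            return err
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Printf("Got : %v %T\n", ev.Type, ev.Object)
        }
        return nil
    }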
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:54:38.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 13 00:54:39.232: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 13 00:54:41.241: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361679, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361679, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361679, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361679, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 13 00:54:44.255: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:54:45.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9502" for this suite.
STEP: Destroying namespace "webhook-9502-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.595 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}
SSSSS
------------------------------
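Toggling which operations a mutating webhook intercepts is done by patching the rules of the MutatingWebhookConfiguration: the test removes CREATE, verifies a fresh ConfigMap passes through unmodified, then patches CREATE back in and verifies mutation resumes. A sketch of the re-enable step using a JSON patch (the configuration name and rule index are assumptions for illustration):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // includeCreateOperation patches rule 0 of a mutating webhook
    // configuration so that CREATE requests are intercepted again.
    func includeCreateOperation(ctx context.Context, cs kubernetes.Interface) error {
        patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]`)
        _, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
            Patch(ctx, "e2e-test-mutating-webhook", types.JSONPatchType, patch, metav1.PatchOptions{})
        return err
    }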
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:54:41.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Nov 13 00:54:41.758: INFO: Waiting up to 5m0s for pod "pod-7e235f83-51ae-4fff-8271-b0f6308f09b9" in namespace "emptydir-7019" to be "Succeeded or Failed"
Nov 13 00:54:41.760: INFO: Pod "pod-7e235f83-51ae-4fff-8271-b0f6308f09b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121159ms
Nov 13 00:54:43.763: INFO: Pod "pod-7e235f83-51ae-4fff-8271-b0f6308f09b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005565208s
Nov 13 00:54:45.767: INFO: Pod "pod-7e235f83-51ae-4fff-8271-b0f6308f09b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009482424s
STEP: Saw pod success
Nov 13 00:54:45.767: INFO: Pod "pod-7e235f83-51ae-4fff-8271-b0f6308f09b9" satisfied condition "Succeeded or Failed"
Nov 13 00:54:45.769: INFO: Trying to get logs from node node2 pod pod-7e235f83-51ae-4fff-8271-b0f6308f09b9 container test-container:
STEP: delete the pod
Nov 13 00:54:45.857: INFO: Waiting for pod pod-7e235f83-51ae-4fff-8271-b0f6308f09b9 to disappear
Nov 13 00:54:45.859: INFO: Pod pod-7e235f83-51ae-4fff-8271-b0f6308f09b9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:54:45.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7019" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":149,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:54:42.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Nov 13 00:54:42.638: INFO: Waiting up to 5m0s for pod "pod-dc6655dc-88a0-40c7-b302-4ebe532a7f1e" in namespace "emptydir-8211" to be "Succeeded or Failed"
Nov 13 00:54:42.640: INFO: Pod "pod-dc6655dc-88a0-40c7-b302-4ebe532a7f1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.375583ms
Nov 13 00:54:44.643: INFO: Pod "pod-dc6655dc-88a0-40c7-b302-4ebe532a7f1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004810638s
Nov 13 00:54:46.646: INFO: Pod "pod-dc6655dc-88a0-40c7-b302-4ebe532a7f1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007895652s
STEP: Saw pod success
Nov 13 00:54:46.646: INFO: Pod "pod-dc6655dc-88a0-40c7-b302-4ebe532a7f1e" satisfied condition "Succeeded or Failed"
Nov 13 00:54:46.648: INFO: Trying to get logs from node node1 pod pod-dc6655dc-88a0-40c7-b302-4ebe532a7f1e container test-container:
STEP: delete the pod
Nov 13 00:54:46.660: INFO: Waiting for pod pod-dc6655dc-88a0-40c7-b302-4ebe532a7f1e to disappear
Nov 13 00:54:46.662: INFO: Pod pod-dc6655dc-88a0-40c7-b302-4ebe532a7f1e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:54:46.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8211" for this suite.
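Both EmptyDir tests above share one pattern: a short-lived pod mounts an emptyDir volume (backed by node disk for the default medium, by tmpfs for Memory), a test container checks the file or mount mode and exits, and the pod reaching Succeeded is the pass signal. A sketch of the volume half of those pod specs (the volume name is a placeholder; the actual mode checking is done by the test image in the real suite):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // emptyDirVolume returns an emptyDir volume; medium selects the backing:
    // corev1.StorageMediumDefault -> node disk, corev1.StorageMediumMemory -> tmpfs.
    func emptyDirVolume(medium corev1.StorageMedium) corev1.Volume {
        return corev1.Volume{
            Name: "test-volume",
            VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{Medium: medium},
            },
        }
    }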
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":118,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:42.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Nov 13 00:54:42.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9870 create -f -' Nov 13 00:54:43.201: INFO: stderr: "" Nov 13 00:54:43.201: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Nov 13 00:54:43.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9870 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 13 00:54:43.394: INFO: stderr: "" Nov 13 00:54:43.394: INFO: stdout: "update-demo-nautilus-5vvs8 update-demo-nautilus-pzql7 " Nov 13 00:54:43.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9870 get pods update-demo-nautilus-5vvs8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 13 00:54:43.567: INFO: stderr: "" Nov 13 00:54:43.567: INFO: stdout: "" Nov 13 00:54:43.567: INFO: update-demo-nautilus-5vvs8 is created but not running Nov 13 00:54:48.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9870 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 13 00:54:48.746: INFO: stderr: "" Nov 13 00:54:48.746: INFO: stdout: "update-demo-nautilus-5vvs8 update-demo-nautilus-pzql7 " Nov 13 00:54:48.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9870 get pods update-demo-nautilus-5vvs8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 13 00:54:48.927: INFO: stderr: "" Nov 13 00:54:48.927: INFO: stdout: "true" Nov 13 00:54:48.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9870 get pods update-demo-nautilus-5vvs8 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Nov 13 00:54:49.094: INFO: stderr: "" Nov 13 00:54:49.094: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Nov 13 00:54:49.094: INFO: validating pod update-demo-nautilus-5vvs8 Nov 13 00:54:49.098: INFO: got data: { "image": "nautilus.jpg" } Nov 13 00:54:49.098: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 13 00:54:49.098: INFO: update-demo-nautilus-5vvs8 is verified up and running Nov 13 00:54:49.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9870 get pods update-demo-nautilus-pzql7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 13 00:54:49.285: INFO: stderr: "" Nov 13 00:54:49.285: INFO: stdout: "true" Nov 13 00:54:49.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9870 get pods update-demo-nautilus-pzql7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Nov 13 00:54:49.457: INFO: stderr: "" Nov 13 00:54:49.457: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Nov 13 00:54:49.457: INFO: validating pod update-demo-nautilus-pzql7 Nov 13 00:54:49.460: INFO: got data: { "image": "nautilus.jpg" } Nov 13 00:54:49.460: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 13 00:54:49.460: INFO: update-demo-nautilus-pzql7 is verified up and running STEP: using delete to clean up resources Nov 13 00:54:49.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9870 delete --grace-period=0 --force -f -' Nov 13 00:54:49.615: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 13 00:54:49.615: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Nov 13 00:54:49.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9870 get rc,svc -l name=update-demo --no-headers' Nov 13 00:54:49.835: INFO: stderr: "No resources found in kubectl-9870 namespace.\n" Nov 13 00:54:49.835: INFO: stdout: "" Nov 13 00:54:49.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9870 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 13 00:54:50.022: INFO: stderr: "" Nov 13 00:54:50.022: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:54:50.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9870" for this suite. 
• [SLOW TEST:7.254 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291
    should create and stop a replication controller [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:53:51.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Nov 13 00:53:52.053: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:54:52.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5433" for this suite.

• [SLOW TEST:60.955 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:54:45.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Nov 13 00:54:45.383: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:54:52.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4297" for this suite.
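The two CustomResourceDefinition tests above talk to the apiextensions API group rather than core, so they use the dedicated apiextensions clientset. A minimal sketch of the listing operation (assumes an already-built rest.Config):

    package main

    import (
        "context"
        "fmt"

        apiextensionsclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/rest"
    )

    // listCRDs prints the name of every CustomResourceDefinition in the cluster.
    func listCRDs(ctx context.Context, cfg *rest.Config) error {
        client, err := apiextensionsclientset.NewForConfig(cfg)
        if err != nil {
            return err
        }
        crds, err := client.ApiextensionsV1().CustomResourceDefinitions().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, crd := range crds.Items {
            fmt.Println(crd.Name)
        }
        return nil
    }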
• [SLOW TEST:7.637 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  custom resource defaulting for requests and from storage works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}
SSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":4,"skipped":25,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:54:50.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Nov 13 00:54:50.066: INFO: Waiting up to 5m0s for pod "downwardapi-volume-061475a6-42a8-464a-b1e6-da82917da9bb" in namespace "projected-5275" to be "Succeeded or Failed"
Nov 13 00:54:50.069: INFO: Pod "downwardapi-volume-061475a6-42a8-464a-b1e6-da82917da9bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.61477ms
Nov 13 00:54:52.072: INFO: Pod "downwardapi-volume-061475a6-42a8-464a-b1e6-da82917da9bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00593569s
Nov 13 00:54:54.076: INFO: Pod "downwardapi-volume-061475a6-42a8-464a-b1e6-da82917da9bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009592641s
STEP: Saw pod success
Nov 13 00:54:54.076: INFO: Pod "downwardapi-volume-061475a6-42a8-464a-b1e6-da82917da9bb" satisfied condition "Succeeded or Failed"
Nov 13 00:54:54.078: INFO: Trying to get logs from node node1 pod downwardapi-volume-061475a6-42a8-464a-b1e6-da82917da9bb container client-container:
STEP: delete the pod
Nov 13 00:54:54.089: INFO: Waiting for pod downwardapi-volume-061475a6-42a8-464a-b1e6-da82917da9bb to disappear
Nov 13 00:54:54.090: INFO: Pod downwardapi-volume-061475a6-42a8-464a-b1e6-da82917da9bb no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:54:54.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5275" for this suite.
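The projected downward API test exposes the container's own CPU request as a file inside the mounted volume; the key piece is a resourceFieldRef item in the projection. A sketch of that volume source (the container name matches the log, the file path and divisor follow the usual e2e shape and are treated here as assumptions):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // cpuRequestVolume projects the client-container's CPU request into the
    // file "cpu_request" inside the mounted volume, expressed in millicores.
    func cpuRequestVolume() corev1.VolumeSource {
        return corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    DownwardAPI: &corev1.DownwardAPIProjection{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "cpu_request",
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "requests.cpu",
                                Divisor:       resource.MustParse("1m"),
                            },
                        }},
                    },
                }},
            },
        }
    }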
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":25,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":52,"failed":0} [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:28.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-secret-rx7v STEP: Creating a pod to test atomic-volume-subpath Nov 13 00:54:28.410: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-rx7v" in namespace "subpath-2229" to be "Succeeded or Failed" Nov 13 00:54:28.413: INFO: Pod "pod-subpath-test-secret-rx7v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.688745ms Nov 13 00:54:30.416: INFO: Pod "pod-subpath-test-secret-rx7v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006524622s Nov 13 00:54:32.420: INFO: Pod "pod-subpath-test-secret-rx7v": Phase="Running", Reason="", readiness=true. Elapsed: 4.009681639s Nov 13 00:54:34.425: INFO: Pod "pod-subpath-test-secret-rx7v": Phase="Running", Reason="", readiness=true. Elapsed: 6.014741242s Nov 13 00:54:36.430: INFO: Pod "pod-subpath-test-secret-rx7v": Phase="Running", Reason="", readiness=true. Elapsed: 8.020087477s Nov 13 00:54:38.435: INFO: Pod "pod-subpath-test-secret-rx7v": Phase="Running", Reason="", readiness=true. Elapsed: 10.024791155s Nov 13 00:54:40.439: INFO: Pod "pod-subpath-test-secret-rx7v": Phase="Running", Reason="", readiness=true. Elapsed: 12.028937212s Nov 13 00:54:42.442: INFO: Pod "pod-subpath-test-secret-rx7v": Phase="Running", Reason="", readiness=true. Elapsed: 14.032206418s Nov 13 00:54:44.447: INFO: Pod "pod-subpath-test-secret-rx7v": Phase="Running", Reason="", readiness=true. Elapsed: 16.036944829s Nov 13 00:54:46.451: INFO: Pod "pod-subpath-test-secret-rx7v": Phase="Running", Reason="", readiness=true. Elapsed: 18.040770793s Nov 13 00:54:48.455: INFO: Pod "pod-subpath-test-secret-rx7v": Phase="Running", Reason="", readiness=true. Elapsed: 20.045561698s Nov 13 00:54:50.458: INFO: Pod "pod-subpath-test-secret-rx7v": Phase="Running", Reason="", readiness=true. Elapsed: 22.048408423s Nov 13 00:54:52.461: INFO: Pod "pod-subpath-test-secret-rx7v": Phase="Running", Reason="", readiness=true. Elapsed: 24.050707805s Nov 13 00:54:54.465: INFO: Pod "pod-subpath-test-secret-rx7v": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.054656962s STEP: Saw pod success Nov 13 00:54:54.465: INFO: Pod "pod-subpath-test-secret-rx7v" satisfied condition "Succeeded or Failed" Nov 13 00:54:54.467: INFO: Trying to get logs from node node2 pod pod-subpath-test-secret-rx7v container test-container-subpath-secret-rx7v: STEP: delete the pod Nov 13 00:54:54.481: INFO: Waiting for pod pod-subpath-test-secret-rx7v to disappear Nov 13 00:54:54.492: INFO: Pod pod-subpath-test-secret-rx7v no longer exists STEP: Deleting pod pod-subpath-test-secret-rx7v Nov 13 00:54:54.492: INFO: Deleting pod "pod-subpath-test-secret-rx7v" in namespace "subpath-2229" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:54:54.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2229" for this suite. • [SLOW TEST:26.132 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":52,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:45.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for all pods to be running Nov 13 00:54:47.987: INFO: running pods: 0 < 3 Nov 13 00:54:49.991: INFO: running pods: 0 < 3 Nov 13 00:54:51.991: INFO: running pods: 0 < 3 Nov 13 00:54:53.992: INFO: running pods: 1 < 3 Nov 13 00:54:55.992: INFO: running pods: 2 < 3 [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:54:57.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-6155" for this suite. 
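A PodDisruptionBudget's status only converges after the pods its selector matches are observed by the disruption controller, which is why the test above polls running pods: N < 3 before checking status. A sketch of creating such a PDB (the selector labels and threshold are illustrative):

    package main

    import (
        "context"

        policyv1 "k8s.io/api/policy/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
    )

    // createPDB requires at least 2 of the pods labelled foo=bar to stay up;
    // the disruption controller fills in Status (CurrentHealthy,
    // DisruptionsAllowed, ...) once the matching pods are running.
    func createPDB(ctx context.Context, cs kubernetes.Interface, ns string) (*policyv1.PodDisruptionBudget, error) {
        minAvailable := intstr.FromInt(2)
        pdb := &policyv1.PodDisruptionBudget{
            ObjectMeta: metav1.ObjectMeta{Name: "test-pdb", Namespace: ns},
            Spec: policyv1.PodDisruptionBudgetSpec{
                Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"foo": "bar"}},
                MinAvailable: &minAvailable,
            },
        }
        return cs.PolicyV1().PodDisruptionBudgets(ns).Create(ctx, pdb, metav1.CreateOptions{})
    }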
• [SLOW TEST:12.074 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should observe PodDisruptionBudget status updated [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":5,"skipped":176,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:54:28.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Performing setup for networking test in namespace pod-network-test-4539
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 00:54:28.463: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 00:54:28.497: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 00:54:30.500: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 00:54:32.501: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 00:54:34.502: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 00:54:36.502: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 00:54:38.501: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 00:54:40.501: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 00:54:42.501: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 00:54:44.502: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 00:54:46.501: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 00:54:48.502: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 00:54:50.500: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 00:54:50.505: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 00:54:58.526: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
Nov 13 00:54:58.527: INFO: Breadth first check of 10.244.3.29 on host 10.10.190.207...
Nov 13 00:54:58.529: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.212:9080/dial?request=hostname&protocol=http&host=10.244.3.29&port=8080&tries=1'] Namespace:pod-network-test-4539 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 00:54:58.529: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 00:54:58.629: INFO: Waiting for responses: map[]
Nov 13 00:54:58.629: INFO: reached 10.244.3.29 after 0/1 tries
Nov 13 00:54:58.629: INFO: Breadth first check of 10.244.4.201 on host 10.10.190.208...
Nov 13 00:54:58.632: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.212:9080/dial?request=hostname&protocol=http&host=10.244.4.201&port=8080&tries=1'] Namespace:pod-network-test-4539 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 00:54:58.632: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 00:54:58.741: INFO: Waiting for responses: map[]
Nov 13 00:54:58.741: INFO: reached 10.244.4.201 after 0/1 tries
Nov 13 00:54:58.741: INFO: Going to retry 0 out of 2 pods....
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:54:58.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4539" for this suite.

• [SLOW TEST:30.312 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":60,"failed":0}
SSSSSSSSSSSS
------------------------------
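The connectivity check above is an HTTP fan-out: the suite execs a curl inside the test pod against the webserver container's /dial endpoint on port 9080, which in turn probes each netserver pod's hostname handler on port 8080. The probe URL itself is ordinary HTTP; a hedged sketch of issuing it (the IPs shown are from this run and would differ elsewhere; the caller must be able to reach the pod network, e.g. run inside a pod):

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    // dialHostname asks the test pod's webserver at testPodIP to probe
    // http://targetIP:8080/hostname once and report what it got back.
    func dialHostname(testPodIP, targetIP string) (string, error) {
        url := fmt.Sprintf("http://%s:9080/dial?request=hostname&protocol=http&host=%s&port=8080&tries=1",
            testPodIP, targetIP)
        resp, err := http.Get(url)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        return string(body), err
    }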
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:54:46.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:54:59.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-608" for this suite.

• [SLOW TEST:13.100 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":9,"skipped":124,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:54:58.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should guarantee kube-root-ca.crt exist in any namespace [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Nov 13 00:54:58.804: INFO: Got root ca configmap in namespace "svcaccounts-6843"
Nov 13 00:54:58.808: INFO: Deleted root ca configmap in namespace "svcaccounts-6843"
STEP: waiting for a new root ca configmap created
Nov 13 00:54:59.310: INFO: Recreated root ca configmap in namespace "svcaccounts-6843"
Nov 13 00:54:59.313: INFO: Updated root ca configmap in namespace "svcaccounts-6843"
STEP: waiting for the root ca configmap reconciled
Nov 13 00:54:59.817: INFO: Reconciled root ca configmap in namespace "svcaccounts-6843"
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:54:59.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6843" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":4,"skipped":72,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:54:53.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Nov 13 00:54:59.571: INFO: Successfully updated pod "adopt-release-cz4tq"
STEP: Checking that the Job readopts the Pod
Nov 13 00:54:59.571: INFO: Waiting up to 15m0s for pod "adopt-release-cz4tq" in namespace "job-7333" to be "adopted"
Nov 13 00:54:59.573: INFO: Pod "adopt-release-cz4tq": Phase="Running", Reason="", readiness=true. Elapsed: 2.186344ms
Nov 13 00:55:01.577: INFO: Pod "adopt-release-cz4tq": Phase="Running", Reason="", readiness=true. Elapsed: 2.00579747s
Nov 13 00:55:01.577: INFO: Pod "adopt-release-cz4tq" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Nov 13 00:55:02.087: INFO: Successfully updated pod "adopt-release-cz4tq"
STEP: Checking that the Job releases the Pod
Nov 13 00:55:02.087: INFO: Waiting up to 15m0s for pod "adopt-release-cz4tq" in namespace "job-7333" to be "released"
Nov 13 00:55:02.089: INFO: Pod "adopt-release-cz4tq": Phase="Running", Reason="", readiness=true. Elapsed: 2.366939ms
Nov 13 00:55:04.097: INFO: Pod "adopt-release-cz4tq": Phase="Running", Reason="", readiness=true. Elapsed: 2.010425875s
Nov 13 00:55:04.097: INFO: Pod "adopt-release-cz4tq" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:55:04.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7333" for this suite.

• [SLOW TEST:11.076 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":4,"skipped":27,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:54:58.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Given a Pod with a 'name' label pod-adoption-release is created
Nov 13 00:54:58.061: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true)
Nov 13 00:55:00.064: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true)
Nov 13 00:55:02.065: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true)
Nov 13 00:55:04.065: INFO: The status of Pod pod-adoption-release is Running (Ready = true)
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Nov 13 00:55:05.080: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:55:06.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9389" for this suite.
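Both the Job and ReplicaSet tests above exercise the controller adoption/release contract: a pod whose labels match a controller's selector gets adopted (an ownerReference is added to it), and stripping the matching label makes the controller release it again. A sketch of the release half via a strategic-merge patch (the pod name is the one from the log; the helper itself is illustrative):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // releaseFromController removes the "name" label, so the owning Job or
    // ReplicaSet no longer selects the pod and orphans it on the next sync.
    func releaseFromController(ctx context.Context, cs kubernetes.Interface, ns, pod string) error {
        patch := []byte(`{"metadata":{"labels":{"name":null}}}`)
        _, err := cs.CoreV1().Pods(ns).Patch(ctx, pod, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        return err
    }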
• [SLOW TEST:8.078 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":6,"skipped":187,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:55:06.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Pods Set QOS Class
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:55:06.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5171" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":7,"skipped":204,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
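The QOS-class check rests on the rule that a pod whose every container has identical, non-empty requests and limits is classed Guaranteed, and the result is recorded in status.qosClass. A sketch of such a container spec (image and quantities are placeholders):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // guaranteedContainer has requests == limits for both cpu and memory, so
    // a pod made only of such containers gets Status.QOSClass = PodQOSGuaranteed.
    func guaranteedContainer() corev1.Container {
        rl := corev1.ResourceList{
            corev1.ResourceCPU:    resource.MustParse("100m"),
            corev1.ResourceMemory: resource.MustParse("100Mi"),
        }
        return corev1.Container{
            Name:      "agnhost",
            Image:     "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative image
            Resources: corev1.ResourceRequirements{Requests: rl, Limits: rl},
        }
    }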
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:54:54.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6975 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6975;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6975 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6975;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6975.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6975.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6975.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6975.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6975.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6975.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6975.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6975.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6975.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6975.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6975.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6975.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6975.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 100.4.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.4.100_udp@PTR;check="$$(dig +tcp +noall +answer +search 100.4.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.4.100_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6975 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6975;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6975 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6975;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6975.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6975.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6975.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6975.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6975.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6975.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6975.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6975.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6975.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6975.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6975.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6975.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6975.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 100.4.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.4.100_udp@PTR;check="$$(dig +tcp +noall +answer +search 100.4.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.4.100_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 13 00:55:02.216: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6975/dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d: the server could not find the requested resource (get pods dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d)
Nov 13 00:55:02.218: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6975/dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d: the server could not find the requested resource (get pods dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d)
Nov 13 00:55:02.221: INFO: Unable to read wheezy_udp@dns-test-service.dns-6975 from pod dns-6975/dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d: the server could not find the requested resource (get pods dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d)
Nov 13 00:55:02.224: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6975 from pod dns-6975/dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d: the server could not find the requested resource (get pods dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d)
Nov 13 00:55:02.226: INFO: Unable to read wheezy_udp@dns-test-service.dns-6975.svc from pod dns-6975/dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d: the server could not find the requested resource (get pods dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d)
Nov 13 00:55:02.229: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6975.svc from pod dns-6975/dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d: the server could not find the requested resource (get pods dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d)
Nov 13 00:55:02.231: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6975.svc from pod dns-6975/dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d: the server could not find the requested resource (get pods dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d)
Nov 13 00:55:02.235: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6975.svc from pod dns-6975/dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d: the server could not find the requested resource (get pods dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d)
Nov 13 00:55:02.253: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6975/dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d: the server could not find the requested resource (get pods dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d)
Nov 13 00:55:02.255: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6975/dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d: the server could not find the requested resource (get pods dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d)
Nov 13 00:55:02.258: INFO: Unable to read jessie_udp@dns-test-service.dns-6975 from pod dns-6975/dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d: the server could not find the requested resource (get pods dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d)
Nov 13 00:55:02.261: INFO: Unable to read jessie_tcp@dns-test-service.dns-6975 from pod dns-6975/dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d: the server could not find the requested resource (get pods dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d)
Nov 13 00:55:02.264: INFO: Unable to read jessie_udp@dns-test-service.dns-6975.svc from pod dns-6975/dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d: the server could not find the requested resource (get pods dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d)
Nov 13 00:55:02.267: INFO: Unable to read jessie_tcp@dns-test-service.dns-6975.svc from pod dns-6975/dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d: the server could not find the requested resource (get pods dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d)
Nov 13 00:55:02.269: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6975.svc from pod dns-6975/dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d: the server could not find the requested resource (get pods dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d)
Nov 13 00:55:02.272: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6975.svc from pod dns-6975/dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d: the server could not find the requested resource (get pods dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d)
Nov 13 00:55:02.286: INFO: Lookups using dns-6975/dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6975 wheezy_tcp@dns-test-service.dns-6975 wheezy_udp@dns-test-service.dns-6975.svc wheezy_tcp@dns-test-service.dns-6975.svc wheezy_udp@_http._tcp.dns-test-service.dns-6975.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6975.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6975 jessie_tcp@dns-test-service.dns-6975 jessie_udp@dns-test-service.dns-6975.svc jessie_tcp@dns-test-service.dns-6975.svc jessie_udp@_http._tcp.dns-test-service.dns-6975.svc jessie_tcp@_http._tcp.dns-test-service.dns-6975.svc]
Nov 13 00:55:07.357: INFO: DNS probes using dns-6975/dns-test-9b739ec2-9c27-408f-8af9-764d1b0c0c4d succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:55:07.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6975" for this suite.
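What the dig loops above verify is cluster-DNS search-path expansion: inside a pod, the partial name dns-test-service expands through the /etc/resolv.conf search domains to dns-test-service.<namespace>.svc.cluster.local, so even the shortest form resolves. A hedged sketch of the same check from Go code running inside a pod in that namespace:

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    // resolvePartialName resolves a service by its short name; the pod's
    // resolv.conf search list supplies the namespace/svc/cluster suffixes.
    func resolvePartialName() error {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        addrs, err := net.DefaultResolver.LookupHost(ctx, "dns-test-service")
        if err != nil {
            return err
        }
        fmt.Println("resolved to", addrs)
        return nil
    }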
• [SLOW TEST:13.239 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":48,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:55:07.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:55:07.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8374" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":7,"skipped":51,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-instrumentation] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:55:07.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a collection of events [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create set of events
Nov 13 00:55:07.544: INFO: created test-event-1
Nov 13 00:55:07.547: INFO: created test-event-2
Nov 13 00:55:07.549: INFO: created test-event-3
STEP: get a list of Events with a label in the current namespace
STEP: delete collection of events
Nov 13 00:55:07.552: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
Nov 13 00:55:07.565: INFO: requesting list of events to confirm quantity
[AfterEach] [sig-instrumentation] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:55:07.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8149" for this suite.
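Both tests above lean on label selectors for bulk operations: the secret is deleted via a LabelSelector, and the three events disappear in a single DeleteCollection call. A sketch of the latter (the label key/value is illustrative):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteTestEvents removes every Event in ns carrying the given label in
    // one API call, mirroring the "delete collection of events" step above.
    func deleteTestEvents(ctx context.Context, cs kubernetes.Interface, ns string) error {
        return cs.CoreV1().Events(ns).DeleteCollection(ctx, metav1.DeleteOptions{},
            metav1.ListOptions{LabelSelector: "testevent-set=true"})
    }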
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":8,"skipped":61,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:59.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-0a5b719d-eed6-4cd7-a155-b1faf2b1c53b STEP: Creating secret with name s-test-opt-upd-49affbac-e2ce-4e46-9699-090f872c0ee8 STEP: Creating the pod Nov 13 00:54:59.879: INFO: The status of Pod pod-secrets-60470e64-95ea-46eb-bd53-4fa7b12ab4ee is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:01.883: INFO: The status of Pod pod-secrets-60470e64-95ea-46eb-bd53-4fa7b12ab4ee is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:03.885: INFO: The status of Pod pod-secrets-60470e64-95ea-46eb-bd53-4fa7b12ab4ee is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:05.884: INFO: The status of Pod pod-secrets-60470e64-95ea-46eb-bd53-4fa7b12ab4ee is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:07.886: INFO: The status of Pod pod-secrets-60470e64-95ea-46eb-bd53-4fa7b12ab4ee is Running (Ready = true) STEP: Deleting secret s-test-opt-del-0a5b719d-eed6-4cd7-a155-b1faf2b1c53b STEP: Updating secret s-test-opt-upd-49affbac-e2ce-4e46-9699-090f872c0ee8 STEP: Creating secret with name s-test-opt-create-1847f4a4-b1cf-49cf-a079-bebdcbf7d001 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:55:09.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1626" for this suite. • [SLOW TEST:10.116 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":75,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:54.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Nov 13 00:54:54.554: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:56.558: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:58.559: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:00.558: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Nov 13 00:55:00.573: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:02.575: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:04.577: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:06.578: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:08.579: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Nov 13 00:55:08.587: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 13 00:55:08.590: INFO: Pod pod-with-prestop-exec-hook still exists Nov 13 00:55:10.592: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 13 00:55:10.595: INFO: Pod pod-with-prestop-exec-hook still exists Nov 13 00:55:12.594: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 13 00:55:12.596: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:55:12.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8353" for this suite. 
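------------------------------
A PreStop exec hook of the kind deleted above runs at deletion time, before the container receives SIGTERM; the few seconds of "still exists" polling in the log are the hook executing. A sketch of such a pod, printed as a manifest — the image and hook command are illustrative (the suite's hook instead curls the pod-handle-http-request pod's IP), and on client-go before v0.23 the handler type is corev1.Handler rather than LifecycleHandler:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox", // illustrative; the suite uses its agnhost image
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							// Runs on pod deletion, before SIGTERM is sent.
							Command: []string{"sh", "-c", "echo prestop"},
						},
					},
				},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
------------------------------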
• [SLOW TEST:18.091 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":58,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:52.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-5xlp STEP: Creating a pod to test atomic-volume-subpath Nov 13 00:54:52.891: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5xlp" in namespace "subpath-1510" to be "Succeeded or Failed" Nov 13 00:54:52.894: INFO: Pod "pod-subpath-test-configmap-5xlp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.846466ms Nov 13 00:54:54.897: INFO: Pod "pod-subpath-test-configmap-5xlp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006537069s Nov 13 00:54:56.901: INFO: Pod "pod-subpath-test-configmap-5xlp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009567713s Nov 13 00:54:58.904: INFO: Pod "pod-subpath-test-configmap-5xlp": Phase="Running", Reason="", readiness=true. Elapsed: 6.013513754s Nov 13 00:55:00.907: INFO: Pod "pod-subpath-test-configmap-5xlp": Phase="Running", Reason="", readiness=true. Elapsed: 8.016556324s Nov 13 00:55:02.911: INFO: Pod "pod-subpath-test-configmap-5xlp": Phase="Running", Reason="", readiness=true. Elapsed: 10.020533704s Nov 13 00:55:04.915: INFO: Pod "pod-subpath-test-configmap-5xlp": Phase="Running", Reason="", readiness=true. Elapsed: 12.024086863s Nov 13 00:55:06.918: INFO: Pod "pod-subpath-test-configmap-5xlp": Phase="Running", Reason="", readiness=true. Elapsed: 14.026761434s Nov 13 00:55:08.921: INFO: Pod "pod-subpath-test-configmap-5xlp": Phase="Running", Reason="", readiness=true. Elapsed: 16.030339719s Nov 13 00:55:10.925: INFO: Pod "pod-subpath-test-configmap-5xlp": Phase="Running", Reason="", readiness=true. Elapsed: 18.034208332s Nov 13 00:55:12.930: INFO: Pod "pod-subpath-test-configmap-5xlp": Phase="Running", Reason="", readiness=true. Elapsed: 20.038977667s Nov 13 00:55:14.933: INFO: Pod "pod-subpath-test-configmap-5xlp": Phase="Running", Reason="", readiness=true. Elapsed: 22.042441904s Nov 13 00:55:16.936: INFO: Pod "pod-subpath-test-configmap-5xlp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.045079387s STEP: Saw pod success Nov 13 00:55:16.936: INFO: Pod "pod-subpath-test-configmap-5xlp" satisfied condition "Succeeded or Failed" Nov 13 00:55:16.938: INFO: Trying to get logs from node node1 pod pod-subpath-test-configmap-5xlp container test-container-subpath-configmap-5xlp: STEP: delete the pod Nov 13 00:55:16.962: INFO: Waiting for pod pod-subpath-test-configmap-5xlp to disappear Nov 13 00:55:16.964: INFO: Pod pod-subpath-test-configmap-5xlp no longer exists STEP: Deleting pod pod-subpath-test-configmap-5xlp Nov 13 00:55:16.964: INFO: Deleting pod "pod-subpath-test-configmap-5xlp" in namespace "subpath-1510" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:55:16.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1510" for this suite. • [SLOW TEST:24.120 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":12,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:07.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 00:55:07.660: INFO: The status of Pod busybox-readonly-fs58d8f3b0-abe3-43e9-8e0e-c378560e163f is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:09.662: INFO: The status of Pod busybox-readonly-fs58d8f3b0-abe3-43e9-8e0e-c378560e163f is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:11.663: INFO: The status of Pod busybox-readonly-fs58d8f3b0-abe3-43e9-8e0e-c378560e163f is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:13.664: INFO: The status of Pod busybox-readonly-fs58d8f3b0-abe3-43e9-8e0e-c378560e163f is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:15.663: INFO: The status of Pod busybox-readonly-fs58d8f3b0-abe3-43e9-8e0e-c378560e163f is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:17.664: INFO: The status of Pod busybox-readonly-fs58d8f3b0-abe3-43e9-8e0e-c378560e163f is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 
00:55:17.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7617" for this suite. • [SLOW TEST:10.057 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a read only busybox container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":82,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:04.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Nov 13 00:55:04.832: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Nov 13 00:55:06.843: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361704, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361704, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361704, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361704, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 00:55:08.848: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361704, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361704, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361704, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361704, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 13 00:55:11.853: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 00:55:11.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:55:19.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7526" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:15.850 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":5,"skipped":30,"failed":0} S ------------------------------ [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:44.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostport STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled Nov 13 00:54:44.630: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:46.633: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:48.635: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:50.635: INFO: The status of Pod pod1 is Running (Ready = true) STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.10.190.208 on the node which pod1 resides and expect scheduled Nov 13 00:54:50.646: 
INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:52.650: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:54.649: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:56.649: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:54:58.654: INFO: The status of Pod pod2 is Running (Ready = true) STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.10.190.208 but use UDP protocol on the node which pod2 resides Nov 13 00:54:58.666: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:00.669: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:02.672: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:04.676: INFO: The status of Pod pod3 is Running (Ready = true) Nov 13 00:55:04.688: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:06.693: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:08.692: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:10.693: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:12.692: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:14.692: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:16.691: INFO: The status of Pod e2e-host-exec is Running (Ready = true) STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 Nov 13 00:55:16.693: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.10.190.208 http://127.0.0.1:54323/hostname] Namespace:hostport-4681 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 00:55:16.693: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.208, port: 54323 Nov 13 00:55:16.794: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.10.190.208:54323/hostname] Namespace:hostport-4681 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 00:55:16.794: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.208, port: 54323 UDP Nov 13 00:55:16.887: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.10.190.208 54323] Namespace:hostport-4681 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 00:55:16.887: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:55:22.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostport-4681" for this suite. 
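------------------------------
The three pods above share hostPort 54323 yet all schedule, because a hostPort claim is keyed by the (hostIP, protocol, port) triple: differing in hostIP or protocol is enough to avoid a conflict, which is exactly what the spec verifies. A sketch of the trio — the image tag is illustrative, and the suite additionally pins pod2 and pod3 to pod1's node:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func hostPortPod(name, hostIP string, proto corev1.Protocol) corev1.Pod {
	return corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative tag
				Args:  []string{"netexec", "--http-port=8080", "--udp-port=8080"},
				Ports: []corev1.ContainerPort{{
					ContainerPort: 8080,
					HostPort:      54323,
					HostIP:        hostIP,
					Protocol:      proto,
				}},
			}},
		},
	}
}

func main() {
	// Same hostPort three times; only hostIP/protocol differ.
	for _, p := range []corev1.Pod{
		hostPortPod("pod1", "127.0.0.1", corev1.ProtocolTCP),
		hostPortPod("pod2", "10.10.190.208", corev1.ProtocolTCP),
		hostPortPod("pod3", "10.10.190.208", corev1.ProtocolUDP),
	} {
		out, _ := yaml.Marshal(p)
		fmt.Printf("---\n%s", out)
	}
}
------------------------------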
• [SLOW TEST:37.476 seconds] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":179,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:17.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-ca1649ec-8e6d-4576-9980-7ebbfa3f97de STEP: Creating a pod to test consume secrets Nov 13 00:55:17.069: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8e3cd01a-721b-4760-954a-57c7e7d5b90f" in namespace "projected-1728" to be "Succeeded or Failed" Nov 13 00:55:17.074: INFO: Pod "pod-projected-secrets-8e3cd01a-721b-4760-954a-57c7e7d5b90f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.590664ms Nov 13 00:55:19.076: INFO: Pod "pod-projected-secrets-8e3cd01a-721b-4760-954a-57c7e7d5b90f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007092666s Nov 13 00:55:21.080: INFO: Pod "pod-projected-secrets-8e3cd01a-721b-4760-954a-57c7e7d5b90f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010422227s Nov 13 00:55:23.086: INFO: Pod "pod-projected-secrets-8e3cd01a-721b-4760-954a-57c7e7d5b90f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016521357s STEP: Saw pod success Nov 13 00:55:23.086: INFO: Pod "pod-projected-secrets-8e3cd01a-721b-4760-954a-57c7e7d5b90f" satisfied condition "Succeeded or Failed" Nov 13 00:55:23.089: INFO: Trying to get logs from node node2 pod pod-projected-secrets-8e3cd01a-721b-4760-954a-57c7e7d5b90f container projected-secret-volume-test: STEP: delete the pod Nov 13 00:55:23.101: INFO: Waiting for pod pod-projected-secrets-8e3cd01a-721b-4760-954a-57c7e7d5b90f to disappear Nov 13 00:55:23.103: INFO: Pod pod-projected-secrets-8e3cd01a-721b-4760-954a-57c7e7d5b90f no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:55:23.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1728" for this suite. 
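------------------------------
The projected-secret pod above boils down to a projected volume with a single Secret source that the test container reads back. A sketch of an equivalent manifest — the secret name and key are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox", // illustrative
				Command: []string{"cat", "/etc/projected-secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								// Assumes a secret with a data-1 key exists.
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}
------------------------------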
• [SLOW TEST:6.086 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":36,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:12.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:55:23.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3282" for this suite. • [SLOW TEST:11.070 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":-1,"completed":5,"skipped":69,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:17.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Nov 13 00:55:23.767: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:55:23.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8585" for this suite. • [SLOW TEST:6.071 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":95,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:59.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-8261 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 13 00:54:59.822: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 13 00:54:59.853: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:01.856: INFO: The status of Pod netserver-0 is Pending, 
waiting for it to be Running (with Ready = true) Nov 13 00:55:03.860: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:05.858: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 00:55:07.856: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 00:55:09.857: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 00:55:11.857: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 00:55:13.857: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 00:55:15.859: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 00:55:17.859: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 00:55:19.859: INFO: The status of Pod netserver-0 is Running (Ready = true) Nov 13 00:55:19.863: INFO: The status of Pod netserver-1 is Running (Ready = false) Nov 13 00:55:21.868: INFO: The status of Pod netserver-1 is Running (Ready = false) Nov 13 00:55:23.867: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Nov 13 00:55:29.903: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Nov 13 00:55:29.903: INFO: Going to poll 10.244.3.37 on port 8080 at least 0 times, with a maximum of 34 tries before failing Nov 13 00:55:29.905: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.3.37:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8261 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 00:55:29.906: INFO: >>> kubeConfig: /root/.kube/config Nov 13 00:55:30.185: INFO: Found all 1 expected endpoints: [netserver-0] Nov 13 00:55:30.185: INFO: Going to poll 10.244.4.218 on port 8080 at least 0 times, with a maximum of 34 tries before failing Nov 13 00:55:30.188: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.4.218:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8261 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 00:55:30.188: INFO: >>> kubeConfig: /root/.kube/config Nov 13 00:55:30.688: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:55:30.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8261" for this suite. 
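------------------------------
The connectivity check above curls each netserver's /hostName endpoint from a host-network test pod. The same probe in plain Go — the pod IPs are this particular run's and must be reachable from wherever it executes (a node or an in-cluster pod):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 15 * time.Second}
	for _, ip := range []string{"10.244.3.37", "10.244.4.218"} {
		resp, err := client.Get(fmt.Sprintf("http://%s:8080/hostName", ip))
		if err != nil {
			fmt.Println(ip, "error:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		// The netserver replies with its own pod name.
		fmt.Printf("%s -> %s\n", ip, string(body))
	}
}
------------------------------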
• [SLOW TEST:30.897 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":130,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:23.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 00:55:23.163: INFO: Creating ReplicaSet my-hostname-basic-51f49c95-2d38-4ea0-b62f-372be570707c Nov 13 00:55:23.169: INFO: Pod name my-hostname-basic-51f49c95-2d38-4ea0-b62f-372be570707c: Found 0 pods out of 1 Nov 13 00:55:28.173: INFO: Pod name my-hostname-basic-51f49c95-2d38-4ea0-b62f-372be570707c: Found 1 pods out of 1 Nov 13 00:55:28.173: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-51f49c95-2d38-4ea0-b62f-372be570707c" is running Nov 13 00:55:30.178: INFO: Pod "my-hostname-basic-51f49c95-2d38-4ea0-b62f-372be570707c-4psnt" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-13 00:55:23 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-13 00:55:23 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-51f49c95-2d38-4ea0-b62f-372be570707c]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-13 00:55:23 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-51f49c95-2d38-4ea0-b62f-372be570707c]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-13 00:55:23 +0000 UTC Reason: Message:}]) Nov 13 00:55:30.179: INFO: Trying to dial the pod Nov 13 00:55:35.189: INFO: Controller my-hostname-basic-51f49c95-2d38-4ea0-b62f-372be570707c: Got expected result from replica 1 [my-hostname-basic-51f49c95-2d38-4ea0-b62f-372be570707c-4psnt]: "my-hostname-basic-51f49c95-2d38-4ea0-b62f-372be570707c-4psnt", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:55:35.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9190" for this suite. 
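------------------------------
Creating the ReplicaSet above amounts to one AppsV1 call; serve-hostname answers with the pod's own name, which is how "Got expected result from replica 1" is checked. A client-go sketch — namespace and image tag are illustrative:

package main

import (
	"context"
	"fmt"
	"os"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic"}
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "my-hostname-basic",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative
						Args:  []string{"serve-hostname"},                // replies with the pod name
					}},
				},
			},
		},
	}
	created, err := cs.AppsV1().ReplicaSets("default").Create(context.Background(), rs, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name)
}
------------------------------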
• [SLOW TEST:12.059 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":5,"skipped":48,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:30.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replication controller my-hostname-basic-1465cc30-db3d-4066-9853-b1fffa27e3a8 Nov 13 00:55:30.783: INFO: Pod name my-hostname-basic-1465cc30-db3d-4066-9853-b1fffa27e3a8: Found 0 pods out of 1 Nov 13 00:55:35.788: INFO: Pod name my-hostname-basic-1465cc30-db3d-4066-9853-b1fffa27e3a8: Found 1 pods out of 1 Nov 13 00:55:35.788: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-1465cc30-db3d-4066-9853-b1fffa27e3a8" are running Nov 13 00:55:35.793: INFO: Pod "my-hostname-basic-1465cc30-db3d-4066-9853-b1fffa27e3a8-kmvsc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-13 00:55:30 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-13 00:55:34 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-13 00:55:34 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-13 00:55:30 +0000 UTC Reason: Message:}]) Nov 13 00:55:35.794: INFO: Trying to dial the pod Nov 13 00:55:40.803: INFO: Controller my-hostname-basic-1465cc30-db3d-4066-9853-b1fffa27e3a8: Got expected result from replica 1 [my-hostname-basic-1465cc30-db3d-4066-9853-b1fffa27e3a8-kmvsc]: "my-hostname-basic-1465cc30-db3d-4066-9853-b1fffa27e3a8-kmvsc", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:55:40.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6784" for this suite. 
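------------------------------
The repeated "status of Pod ... is Pending, waiting for it to be Running" lines in these specs are a poll loop on the pod's Ready condition. A sketch of the same loop with client-go's wait helper — the pod name is illustrative, and wait.PollImmediate is deprecated in newer client-go in favor of PollUntilContextTimeout:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	podName := "my-hostname-basic-xxxxx" // illustrative replica pod name

	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("default").Get(context.Background(), podName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("The status of Pod %s is %s (Ready = %v)\n",
					podName, pod.Status.Phase, c.Status == corev1.ConditionTrue)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		panic(err)
	}
}
------------------------------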
• [SLOW TEST:10.059 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":11,"skipped":152,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:35.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Nov 13 00:55:35.353: INFO: Waiting up to 5m0s for pod "pod-b053e42d-b72b-4b6c-acf2-7d427c3023a8" in namespace "emptydir-1523" to be "Succeeded or Failed" Nov 13 00:55:35.359: INFO: Pod "pod-b053e42d-b72b-4b6c-acf2-7d427c3023a8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.706182ms Nov 13 00:55:37.362: INFO: Pod "pod-b053e42d-b72b-4b6c-acf2-7d427c3023a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009378115s Nov 13 00:55:39.366: INFO: Pod "pod-b053e42d-b72b-4b6c-acf2-7d427c3023a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013496384s Nov 13 00:55:41.371: INFO: Pod "pod-b053e42d-b72b-4b6c-acf2-7d427c3023a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01767671s STEP: Saw pod success Nov 13 00:55:41.371: INFO: Pod "pod-b053e42d-b72b-4b6c-acf2-7d427c3023a8" satisfied condition "Succeeded or Failed" Nov 13 00:55:41.373: INFO: Trying to get logs from node node2 pod pod-b053e42d-b72b-4b6c-acf2-7d427c3023a8 container test-container: STEP: delete the pod Nov 13 00:55:41.385: INFO: Waiting for pod pod-b053e42d-b72b-4b6c-acf2-7d427c3023a8 to disappear Nov 13 00:55:41.387: INFO: Pod pod-b053e42d-b72b-4b6c-acf2-7d427c3023a8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:55:41.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1523" for this suite. 
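------------------------------
The (root,0644,default) case above writes into an emptyDir on the node's default medium and asserts the resulting file mode. An approximate manifest, using busybox in place of the suite's mounttest image:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-check"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					// Write as root with mode 0644, then print the bits
					// the spec asserts on ("Saw pod success" + log check).
					"echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}
------------------------------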
• [SLOW TEST:6.075 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":107,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:06.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:55:43.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9293" for this suite. 
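------------------------------
FallbackToLogsOnError copies container logs into the termination message only when the container fails; since the container above succeeds, the spec expects the message to stay empty (the "Expected: &{} to match" line). A sketch of such a container:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-check"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox", // illustrative
				Command: []string{"true"}, // exits 0, so logs are NOT copied
				// Logs become the termination message only on failure.
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}
------------------------------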
• [SLOW TEST:37.266 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":219,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:40.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 00:55:40.851: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-14cf40fb-1395-476e-8896-09f920c71ee9" in namespace "security-context-test-8761" to be "Succeeded or Failed" Nov 13 00:55:40.853: INFO: Pod "alpine-nnp-false-14cf40fb-1395-476e-8896-09f920c71ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 1.966342ms Nov 13 00:55:42.855: INFO: Pod "alpine-nnp-false-14cf40fb-1395-476e-8896-09f920c71ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004616343s Nov 13 00:55:44.861: INFO: Pod "alpine-nnp-false-14cf40fb-1395-476e-8896-09f920c71ee9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010146845s Nov 13 00:55:44.861: INFO: Pod "alpine-nnp-false-14cf40fb-1395-476e-8896-09f920c71ee9" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:55:44.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8761" for this suite. 
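------------------------------
The terminate-cmd-rpa/rpof/rpn containers above differ in the pod's RestartPolicy (Always, OnFailure, Never, per the suffixes), and the spec asserts on the resulting status fields. A client-go sketch that reads exactly those fields — the pod name is illustrative:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Inspect what the spec asserts on: Phase, RestartCount, Ready, State.
	pod, err := cs.CoreV1().Pods("default").Get(context.Background(),
		"terminate-cmd-rpa", metav1.GetOptions{}) // illustrative name
	if err != nil {
		panic(err)
	}
	fmt.Println("Phase:", pod.Status.Phase)
	for _, st := range pod.Status.ContainerStatuses {
		fmt.Printf("%s RestartCount=%d Ready=%v State=%+v\n",
			st.Name, st.RestartCount, st.Ready, st.State)
	}
}
------------------------------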
• ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":154,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:44.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token Nov 13 00:55:45.540: INFO: created pod pod-service-account-defaultsa Nov 13 00:55:45.540: INFO: pod pod-service-account-defaultsa service account token volume mount: true Nov 13 00:55:45.548: INFO: created pod pod-service-account-mountsa Nov 13 00:55:45.548: INFO: pod pod-service-account-mountsa service account token volume mount: true Nov 13 00:55:45.558: INFO: created pod pod-service-account-nomountsa Nov 13 00:55:45.558: INFO: pod pod-service-account-nomountsa service account token volume mount: false Nov 13 00:55:45.567: INFO: created pod pod-service-account-defaultsa-mountspec Nov 13 00:55:45.567: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Nov 13 00:55:45.577: INFO: created pod pod-service-account-mountsa-mountspec Nov 13 00:55:45.577: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Nov 13 00:55:45.586: INFO: created pod pod-service-account-nomountsa-mountspec Nov 13 00:55:45.586: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Nov 13 00:55:45.594: INFO: created pod pod-service-account-defaultsa-nomountspec Nov 13 00:55:45.595: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Nov 13 00:55:45.603: INFO: created pod pod-service-account-mountsa-nomountspec Nov 13 00:55:45.603: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Nov 13 00:55:45.611: INFO: created pod pod-service-account-nomountsa-nomountspec Nov 13 00:55:45.611: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:55:45.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3842" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":13,"skipped":210,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:43.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-4e0ad8ac-491b-48ce-9eab-f9b2a0f166da STEP: Creating a pod to test consume configMaps Nov 13 00:55:43.617: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-79066f69-b0bd-430e-87c0-da0962344b61" in namespace "projected-7130" to be "Succeeded or Failed" Nov 13 00:55:43.619: INFO: Pod "pod-projected-configmaps-79066f69-b0bd-430e-87c0-da0962344b61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.418026ms Nov 13 00:55:45.623: INFO: Pod "pod-projected-configmaps-79066f69-b0bd-430e-87c0-da0962344b61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00597459s Nov 13 00:55:47.627: INFO: Pod "pod-projected-configmaps-79066f69-b0bd-430e-87c0-da0962344b61": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010561277s Nov 13 00:55:49.630: INFO: Pod "pod-projected-configmaps-79066f69-b0bd-430e-87c0-da0962344b61": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013179659s Nov 13 00:55:51.634: INFO: Pod "pod-projected-configmaps-79066f69-b0bd-430e-87c0-da0962344b61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016707977s STEP: Saw pod success Nov 13 00:55:51.634: INFO: Pod "pod-projected-configmaps-79066f69-b0bd-430e-87c0-da0962344b61" satisfied condition "Succeeded or Failed" Nov 13 00:55:51.637: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-79066f69-b0bd-430e-87c0-da0962344b61 container agnhost-container: STEP: delete the pod Nov 13 00:55:51.660: INFO: Waiting for pod pod-projected-configmaps-79066f69-b0bd-430e-87c0-da0962344b61 to disappear Nov 13 00:55:51.663: INFO: Pod pod-projected-configmaps-79066f69-b0bd-430e-87c0-da0962344b61 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:55:51.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7130" for this suite. 
• [SLOW TEST:8.092 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:22.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: referencing a single matching pod STEP: referencing matching pods with named port STEP: creating empty Endpoints and EndpointSlices for no matching Pods STEP: recreating EndpointSlices after they've been deleted Nov 13 00:55:47.229: INFO: EndpointSlice for Service endpointslice-4873/example-named-port not found [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:55:57.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-4873" for this suite. 
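------------------------------
EndpointSlices are tied to their Service by the well-known kubernetes.io/service-name label; the "EndpointSlice ... not found" line above is the spec noticing the deleted slices before the controller recreates them. A client-go sketch of the same lookup — namespace is illustrative, the service name matches this run's example-named-port:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// List the slices the EndpointSlice controller manages for the service.
	slices, err := cs.DiscoveryV1().EndpointSlices("default").List(context.Background(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=example-named-port"})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		fmt.Printf("%s: %d endpoints, %d ports\n", s.Name, len(s.Endpoints), len(s.Ports))
	}
}
------------------------------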
• [SLOW TEST:35.122 seconds] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":7,"skipped":203,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:45.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Nov 13 00:55:45.687: INFO: The status of Pod annotationupdate62b8d152-6ff7-4b37-acab-e498a95301cd is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:47.691: INFO: The status of Pod annotationupdate62b8d152-6ff7-4b37-acab-e498a95301cd is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:49.693: INFO: The status of Pod annotationupdate62b8d152-6ff7-4b37-acab-e498a95301cd is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:51.690: INFO: The status of Pod annotationupdate62b8d152-6ff7-4b37-acab-e498a95301cd is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:53.692: INFO: The status of Pod annotationupdate62b8d152-6ff7-4b37-acab-e498a95301cd is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:55.691: INFO: The status of Pod annotationupdate62b8d152-6ff7-4b37-acab-e498a95301cd is Running (Ready = true) Nov 13 00:55:56.219: INFO: Successfully updated pod "annotationupdate62b8d152-6ff7-4b37-acab-e498a95301cd" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:55:58.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8114" for this suite. 
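------------------------------
The Downward API spec above relies on the kubelet rewriting a downwardAPI volume file when pod annotations change. A hand-run sketch of the same mechanism (pod name, image, and the 10s poll are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    demo: before
spec:
  containers:
  - name: watcher
    image: busybox:1.34
    # keep printing the projected annotations file; the kubelet rewrites it on change
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 10; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
kubectl annotate pod annotation-demo demo=after --overwrite
kubectl logs -f annotation-demo   # the file contents flip once the kubelet syncs
------------------------------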
• [SLOW TEST:12.677 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":225,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":264,"failed":0} [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:51.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:55:59.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-6683" for this suite. 
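------------------------------
The sysctl spec above sets a namespaced, safe-by-default sysctl through the pod securityContext and then verifies it from inside the container. A hand-run sketch (pod name and image are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  restartPolicy: Never
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced   # one of the safe sysctls kubelets allow by default
      value: "1"
  containers:
  - name: check
    image: busybox:1.34
    command: ["sh", "-c", "cat /proc/sys/kernel/shm_rmid_forced"]
EOF
kubectl logs sysctl-demo   # expect "1" once the pod has Succeeded
------------------------------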
• [SLOW TEST:8.058 seconds] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":10,"skipped":264,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:57.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-2451/configmap-test-a62f330a-8eb3-43c1-8069-bfff99a138ab STEP: Creating a pod to test consume configMaps Nov 13 00:55:57.296: INFO: Waiting up to 5m0s for pod "pod-configmaps-bf251bf6-a656-4c89-ae9b-7388420a4ca2" in namespace "configmap-2451" to be "Succeeded or Failed" Nov 13 00:55:57.298: INFO: Pod "pod-configmaps-bf251bf6-a656-4c89-ae9b-7388420a4ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146269ms Nov 13 00:55:59.303: INFO: Pod "pod-configmaps-bf251bf6-a656-4c89-ae9b-7388420a4ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006973353s Nov 13 00:56:01.308: INFO: Pod "pod-configmaps-bf251bf6-a656-4c89-ae9b-7388420a4ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011477898s Nov 13 00:56:03.312: INFO: Pod "pod-configmaps-bf251bf6-a656-4c89-ae9b-7388420a4ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016090499s Nov 13 00:56:05.315: INFO: Pod "pod-configmaps-bf251bf6-a656-4c89-ae9b-7388420a4ca2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.019147114s STEP: Saw pod success Nov 13 00:56:05.315: INFO: Pod "pod-configmaps-bf251bf6-a656-4c89-ae9b-7388420a4ca2" satisfied condition "Succeeded or Failed" Nov 13 00:56:05.317: INFO: Trying to get logs from node node2 pod pod-configmaps-bf251bf6-a656-4c89-ae9b-7388420a4ca2 container env-test: STEP: delete the pod Nov 13 00:56:05.328: INFO: Waiting for pod pod-configmaps-bf251bf6-a656-4c89-ae9b-7388420a4ca2 to disappear Nov 13 00:56:05.330: INFO: Pod pod-configmaps-bf251bf6-a656-4c89-ae9b-7388420a4ca2 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:05.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2451" for this suite. 
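------------------------------
The ConfigMap-as-environment spec above injects a single key via valueFrom/configMapKeyRef and asserts on the container's output. Equivalent by hand (a sketch; names and image are illustrative):

kubectl create configmap env-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.34
    # $DATA_1 is expanded by the container's shell, not at apply time
    command: ["sh", "-c", "echo DATA_1=$DATA_1"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: env-demo
          key: data-1
EOF
kubectl logs env-demo   # expect DATA_1=value-1
------------------------------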
• [SLOW TEST:8.077 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":207,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:58.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 13 00:55:59.013: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 13 00:56:01.022: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361759, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361759, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361759, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361759, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 00:56:03.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361759, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361759, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361759, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361759, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 13 
00:56:06.033: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Nov 13 00:56:06.046: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:06.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9086" for this suite. STEP: Destroying namespace "webhook-9086-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.583 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":15,"skipped":308,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:41.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3053.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3053.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3053.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3053.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 13 00:55:51.533: INFO: DNS probes using dns-test-3240053f-3a38-4295-9237-0792e1b7ebe8 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3053.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3053.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3053.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3053.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the 
results for each expected name from probers Nov 13 00:56:01.647: INFO: DNS probes using dns-test-9740de05-82d3-4ded-8203-8fe69371e273 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3053.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3053.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3053.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3053.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 13 00:56:07.715: INFO: DNS probes using dns-test-9fbefee2-6189-4f81-a4c3-dfdf94c4378b succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:07.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3053" for this suite. • [SLOW TEST:26.251 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":7,"skipped":149,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:05.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 13 00:56:05.409: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8b107de-f8de-452b-a27a-52bffb230b72" in namespace "projected-3797" to be "Succeeded or Failed" Nov 13 00:56:05.411: INFO: Pod "downwardapi-volume-b8b107de-f8de-452b-a27a-52bffb230b72": Phase="Pending", Reason="", readiness=false. Elapsed: 1.980077ms Nov 13 00:56:07.415: INFO: Pod "downwardapi-volume-b8b107de-f8de-452b-a27a-52bffb230b72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005964582s Nov 13 00:56:09.422: INFO: Pod "downwardapi-volume-b8b107de-f8de-452b-a27a-52bffb230b72": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012707137s STEP: Saw pod success Nov 13 00:56:09.422: INFO: Pod "downwardapi-volume-b8b107de-f8de-452b-a27a-52bffb230b72" satisfied condition "Succeeded or Failed" Nov 13 00:56:09.424: INFO: Trying to get logs from node node1 pod downwardapi-volume-b8b107de-f8de-452b-a27a-52bffb230b72 container client-container: STEP: delete the pod Nov 13 00:56:09.438: INFO: Waiting for pod downwardapi-volume-b8b107de-f8de-452b-a27a-52bffb230b72 to disappear Nov 13 00:56:09.440: INFO: Pod downwardapi-volume-b8b107de-f8de-452b-a27a-52bffb230b72 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:09.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3797" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":223,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:20.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-6234 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6234 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6234 Nov 13 00:54:20.707: INFO: Found 0 stateful pods, waiting for 1 Nov 13 00:54:30.709: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 13 00:54:40.709: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Nov 13 00:54:40.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6234 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 13 00:54:40.958: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 13 00:54:40.958: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 13 00:54:40.958: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 13 00:54:40.961: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Nov 13 00:54:50.966: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 13 00:54:50.966: INFO: Waiting for statefulset 
status.replicas updated to 0 Nov 13 00:54:50.977: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999367s Nov 13 00:54:51.980: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.997520889s Nov 13 00:54:52.983: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.994166366s Nov 13 00:54:53.986: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.991423951s Nov 13 00:54:54.989: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.988207926s Nov 13 00:54:55.992: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.985167298s Nov 13 00:54:56.995: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.982222527s Nov 13 00:54:57.998: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.979034346s Nov 13 00:54:59.000: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.97654716s Nov 13 00:55:00.004: INFO: Verifying statefulset ss doesn't scale past 1 for another 973.996481ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6234 Nov 13 00:55:01.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6234 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 13 00:55:01.502: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Nov 13 00:55:01.502: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 13 00:55:01.502: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 13 00:55:01.504: INFO: Found 1 stateful pods, waiting for 3 Nov 13 00:55:11.510: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Nov 13 00:55:11.510: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Nov 13 00:55:11.510: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Nov 13 00:55:21.509: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Nov 13 00:55:21.509: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Nov 13 00:55:21.509: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Nov 13 00:55:21.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6234 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 13 00:55:21.769: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 13 00:55:21.769: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 13 00:55:21.769: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 13 00:55:21.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6234 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 13 00:55:22.242: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 13 00:55:22.242: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 13 00:55:22.242: INFO: stdout of mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 13 00:55:22.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6234 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 13 00:55:22.498: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 13 00:55:22.498: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 13 00:55:22.498: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 13 00:55:22.498: INFO: Waiting for statefulset status.replicas updated to 0 Nov 13 00:55:22.501: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Nov 13 00:55:32.507: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 13 00:55:32.507: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Nov 13 00:55:32.507: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Nov 13 00:55:32.516: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999448s Nov 13 00:55:33.520: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995756449s Nov 13 00:55:34.524: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992109639s Nov 13 00:55:35.528: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.988066261s Nov 13 00:55:36.533: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.984008829s Nov 13 00:55:37.537: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.979631809s Nov 13 00:55:38.541: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.975526306s Nov 13 00:55:39.546: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.971023892s Nov 13 00:55:40.551: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.965811775s Nov 13 00:55:41.554: INFO: Verifying statefulset ss doesn't scale past 3 for another 961.967238ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-6234 Nov 13 00:55:42.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6234 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 13 00:55:42.876: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Nov 13 00:55:42.876: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 13 00:55:42.876: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 13 00:55:42.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6234 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 13 00:55:43.223: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Nov 13 00:55:43.223: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 13 00:55:43.223: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 13 00:55:43.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6234 exec ss-2 -- /bin/sh -x -c mv
-v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 13 00:55:43.497: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Nov 13 00:55:43.497: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 13 00:55:43.497: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 13 00:55:43.497: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Nov 13 00:56:13.511: INFO: Deleting all statefulset in ns statefulset-6234 Nov 13 00:56:13.515: INFO: Scaling statefulset ss to 0 Nov 13 00:56:13.524: INFO: Waiting for statefulset status.replicas updated to 0 Nov 13 00:56:13.526: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:13.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6234" for this suite. • [SLOW TEST:112.882 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":5,"skipped":33,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:09.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 00:56:09.494: INFO: The status of Pod server-envvars-af30f1e0-4b96-4cb4-b7f1-6c42b97200c5 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:56:11.498: INFO: The status of Pod server-envvars-af30f1e0-4b96-4cb4-b7f1-6c42b97200c5 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:56:13.500: INFO: The status of Pod server-envvars-af30f1e0-4b96-4cb4-b7f1-6c42b97200c5 is Running (Ready = true) Nov 13 00:56:13.519: INFO: Waiting up to 5m0s for pod "client-envvars-388ee92f-e16f-4eb4-a550-32d9a4e1e114" in namespace "pods-7996" to be "Succeeded or Failed" Nov 13 00:56:13.521: INFO: Pod "client-envvars-388ee92f-e16f-4eb4-a550-32d9a4e1e114": Phase="Pending", Reason="", 
readiness=false. Elapsed: 1.867867ms Nov 13 00:56:15.525: INFO: Pod "client-envvars-388ee92f-e16f-4eb4-a550-32d9a4e1e114": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005507326s Nov 13 00:56:17.529: INFO: Pod "client-envvars-388ee92f-e16f-4eb4-a550-32d9a4e1e114": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010108465s STEP: Saw pod success Nov 13 00:56:17.529: INFO: Pod "client-envvars-388ee92f-e16f-4eb4-a550-32d9a4e1e114" satisfied condition "Succeeded or Failed" Nov 13 00:56:17.532: INFO: Trying to get logs from node node2 pod client-envvars-388ee92f-e16f-4eb4-a550-32d9a4e1e114 container env3cont: STEP: delete the pod Nov 13 00:56:17.547: INFO: Waiting for pod client-envvars-388ee92f-e16f-4eb4-a550-32d9a4e1e114 to disappear Nov 13 00:56:17.550: INFO: Pod client-envvars-388ee92f-e16f-4eb4-a550-32d9a4e1e114 no longer exists [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:17.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7996" for this suite. • [SLOW TEST:8.102 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":225,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:17.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Nov 13 00:56:18.242: INFO: starting watch STEP: patching STEP: updating Nov 13 00:56:18.250: INFO: waiting for watch events with expected annotations Nov 13 00:56:18.250: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:18.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-4604" for this suite. 
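------------------------------
The Certificates spec above walks the certificates.k8s.io/v1 CSR surface, including the /approval and /status subresources. The create-and-approve half can be reproduced like this (a sketch: it assumes a PEM CSR in server.csr, GNU base64, and RBAC that permits approval):

cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: demo-csr
spec:
  request: $(base64 -w0 < server.csr)   # -w0 is GNU base64; on BSD/macOS use plain base64
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF
kubectl certificate approve demo-csr                           # writes the /approval subresource
kubectl get csr demo-csr -o jsonpath='{.status.certificate}'   # populated once the signer issues the cert
------------------------------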
• ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":11,"skipped":244,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:18.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 00:56:18.356: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Nov 13 00:56:20.383: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:21.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5513" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":12,"skipped":259,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:21.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Request ServerVersion STEP: Confirm major version Nov 13 00:56:21.435: INFO: Major version: 1 STEP: Confirm minor version Nov 13 00:56:21.435: INFO: cleanMinorVersion: 21 Nov 13 00:56:21.435: INFO: Minor version: 21 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:21.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-530" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":13,"skipped":262,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:13.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Nov 13 00:56:13.625: INFO: The status of Pod annotationupdated9acc5c0-ab70-4ad3-ab82-50da8bea6a42 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:56:15.628: INFO: The status of Pod annotationupdated9acc5c0-ab70-4ad3-ab82-50da8bea6a42 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:56:17.631: INFO: The status of Pod annotationupdated9acc5c0-ab70-4ad3-ab82-50da8bea6a42 is Running (Ready = true) Nov 13 00:56:18.155: INFO: Successfully updated pod "annotationupdated9acc5c0-ab70-4ad3-ab82-50da8bea6a42" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:22.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6186" for this suite. 
• [SLOW TEST:8.604 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":55,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:59.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-2667 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating statefulset ss in namespace statefulset-2667 Nov 13 00:55:59.787: INFO: Found 0 stateful pods, waiting for 1 Nov 13 00:56:09.794: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified STEP: Patch a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Nov 13 00:56:09.812: INFO: Deleting all statefulset in ns statefulset-2667 Nov 13 00:56:09.815: INFO: Scaling statefulset ss to 0 Nov 13 00:56:29.828: INFO: Waiting for statefulset status.replicas updated to 0 Nov 13 00:56:29.830: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:29.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2667" for this suite. 
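------------------------------
The StatefulSet spec above drives replicas exclusively through the scale subresource (get, update, and patch of .../scale). kubectl scale goes through that same subresource; a hand-run sketch (namespace and set name illustrative, and note the --subresource flag only exists on kubectl v1.24+):

kubectl -n demo scale statefulset ss --replicas=2                 # GET+UPDATE of the scale subresource
kubectl -n demo get statefulset ss -o jsonpath='{.spec.replicas}'
# on kubectl v1.24+ the subresource can also be patched directly:
kubectl -n demo patch statefulset ss --subresource=scale --type=merge -p '{"spec":{"replicas":3}}'
------------------------------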
• [SLOW TEST:30.089 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":11,"skipped":273,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:29.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:29.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6382" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":12,"skipped":274,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:19.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W1113 00:55:30.034186 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 00:56:32.056: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:32.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9343" for this suite. 
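------------------------------
The garbage-collector spec above deletes an RC without orphaning and waits for its pods to be collected via their ownerReferences. The cascading behaviour is selectable from kubectl (a sketch; rc and label names are illustrative, flags per recent kubectl):

kubectl delete rc demo-rc --cascade=background   # default: the GC deletes dependent pods asynchronously
kubectl delete rc demo-rc --cascade=orphan       # alternative: keeps the pods, clearing their ownerReferences
# ownership is visible on the dependents:
kubectl get pods -l app=demo -o jsonpath='{range .items[*]}{.metadata.name} {.metadata.ownerReferences[*].kind}{"\n"}{end}'
------------------------------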
• [SLOW TEST:72.093 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":6,"skipped":31,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:07.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Nov 13 00:56:07.857: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:32.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3821" for this suite. • [SLOW TEST:24.738 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":8,"skipped":195,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:32.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override command Nov 13 00:56:32.618: INFO: Waiting up to 5m0s for pod "client-containers-2544582f-11a0-455c-8b64-f0c6af7b9f2a" in namespace "containers-3215" to be "Succeeded or Failed" Nov 13 00:56:32.620: INFO: Pod "client-containers-2544582f-11a0-455c-8b64-f0c6af7b9f2a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1.945429ms Nov 13 00:56:34.624: INFO: Pod "client-containers-2544582f-11a0-455c-8b64-f0c6af7b9f2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006188309s Nov 13 00:56:36.628: INFO: Pod "client-containers-2544582f-11a0-455c-8b64-f0c6af7b9f2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009414984s STEP: Saw pod success Nov 13 00:56:36.628: INFO: Pod "client-containers-2544582f-11a0-455c-8b64-f0c6af7b9f2a" satisfied condition "Succeeded or Failed" Nov 13 00:56:36.631: INFO: Trying to get logs from node node2 pod client-containers-2544582f-11a0-455c-8b64-f0c6af7b9f2a container agnhost-container: STEP: delete the pod Nov 13 00:56:36.644: INFO: Waiting for pod client-containers-2544582f-11a0-455c-8b64-f0c6af7b9f2a to disappear Nov 13 00:56:36.646: INFO: Pod client-containers-2544582f-11a0-455c-8b64-f0c6af7b9f2a no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:36.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3215" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":198,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:22.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:38.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8776" for this suite. • [SLOW TEST:16.126 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":7,"skipped":74,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:29.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Nov 13 00:56:30.022: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7970 4e0eea72-53de-4179-8769-112973dee598 63254 0 2021-11-13 00:56:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-11-13 00:56:30 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Nov 13 00:56:30.022: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7970 4e0eea72-53de-4179-8769-112973dee598 63255 0 2021-11-13 00:56:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-11-13 00:56:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 13 00:56:30.023: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7970 4e0eea72-53de-4179-8769-112973dee598 63256 0 2021-11-13 00:56:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-11-13 00:56:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Nov 13 00:56:40.049: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7970 4e0eea72-53de-4179-8769-112973dee598 63443 0 2021-11-13 00:56:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-11-13 00:56:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 13 00:56:40.049: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7970 4e0eea72-53de-4179-8769-112973dee598 63444 0 2021-11-13 00:56:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-11-13 00:56:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 13 00:56:40.050: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7970 4e0eea72-53de-4179-8769-112973dee598 63445 0 2021-11-13 00:56:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-11-13 00:56:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:40.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7970" for this suite. • [SLOW TEST:10.071 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":13,"skipped":322,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:21.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
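Neither the handler pod nor the hooked pod created in this spec is printed. A minimal sketch of a pod with a postStart exec hook of the kind exercised here, assuming (as the suite does) that the hook reports back to the pod-handle-http-request pod over HTTP; the container name, pause args, and handler address are illustrative stand-ins:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook   # name taken from the log
spec:
  containers:
  - name: hooked-container             # illustrative
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["pause"]
    lifecycle:
      postStart:
        exec:
          # assumption: HANDLER_POD_IP stands in for the IP of the
          # pod-handle-http-request pod created in the step above
          command: ["sh", "-c", "curl 'http://HANDLER_POD_IP:8080/echo?msg=poststart'"]

The kubelet runs the hook command inside the container immediately after it starts; the spec then asks the handler pod whether the request arrived, which is the "check poststart hook" step below.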
Nov 13 00:56:21.514: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:56:23.518: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:56:25.517: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Nov 13 00:56:25.534: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:56:27.536: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:56:29.539: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Nov 13 00:56:29.556: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 13 00:56:29.558: INFO: Pod pod-with-poststart-exec-hook still exists Nov 13 00:56:31.559: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 13 00:56:31.562: INFO: Pod pod-with-poststart-exec-hook still exists Nov 13 00:56:33.559: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 13 00:56:33.562: INFO: Pod pod-with-poststart-exec-hook still exists Nov 13 00:56:35.559: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 13 00:56:35.562: INFO: Pod pod-with-poststart-exec-hook still exists Nov 13 00:56:37.559: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 13 00:56:37.562: INFO: Pod pod-with-poststart-exec-hook still exists Nov 13 00:56:39.560: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 13 00:56:39.564: INFO: Pod pod-with-poststart-exec-hook still exists Nov 13 00:56:41.558: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 13 00:56:41.561: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:41.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8192" for this suite. 
• [SLOW TEST:20.092 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":276,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:38.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 13 00:56:38.415: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a2e4ccbb-5771-4e08-9b7c-c2af346e7e2d" in namespace "projected-3657" to be "Succeeded or Failed" Nov 13 00:56:38.417: INFO: Pod "downwardapi-volume-a2e4ccbb-5771-4e08-9b7c-c2af346e7e2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030615ms Nov 13 00:56:40.421: INFO: Pod "downwardapi-volume-a2e4ccbb-5771-4e08-9b7c-c2af346e7e2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005828272s Nov 13 00:56:42.428: INFO: Pod "downwardapi-volume-a2e4ccbb-5771-4e08-9b7c-c2af346e7e2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012723136s STEP: Saw pod success Nov 13 00:56:42.428: INFO: Pod "downwardapi-volume-a2e4ccbb-5771-4e08-9b7c-c2af346e7e2d" satisfied condition "Succeeded or Failed" Nov 13 00:56:42.430: INFO: Trying to get logs from node node1 pod downwardapi-volume-a2e4ccbb-5771-4e08-9b7c-c2af346e7e2d container client-container: STEP: delete the pod Nov 13 00:56:42.443: INFO: Waiting for pod downwardapi-volume-a2e4ccbb-5771-4e08-9b7c-c2af346e7e2d to disappear Nov 13 00:56:42.445: INFO: Pod downwardapi-volume-a2e4ccbb-5771-4e08-9b7c-c2af346e7e2d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:42.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3657" for this suite. 
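The pod under test is generated with a random UUID name and never printed; a minimal sketch of a projected downward API volume with an explicit defaultMode, which is the property this spec asserts on (the 0400 mode, file path, and pod name are illustrative; client-container and the agnhost image come from the log):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example     # the suite uses a generated name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    # assumption: agnhost's mounttest subcommand reports the file's mode
    args: ["mounttest", "--file_mode=/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400                # the DefaultMode being verified; value illustrative
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name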
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":85,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:41.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 13 00:56:41.687: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca3e371f-b36d-4d5e-8375-b56a672c1bc6" in namespace "projected-6050" to be "Succeeded or Failed" Nov 13 00:56:41.689: INFO: Pod "downwardapi-volume-ca3e371f-b36d-4d5e-8375-b56a672c1bc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.554074ms Nov 13 00:56:43.692: INFO: Pod "downwardapi-volume-ca3e371f-b36d-4d5e-8375-b56a672c1bc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005428693s Nov 13 00:56:45.696: INFO: Pod "downwardapi-volume-ca3e371f-b36d-4d5e-8375-b56a672c1bc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009160722s STEP: Saw pod success Nov 13 00:56:45.696: INFO: Pod "downwardapi-volume-ca3e371f-b36d-4d5e-8375-b56a672c1bc6" satisfied condition "Succeeded or Failed" Nov 13 00:56:45.699: INFO: Trying to get logs from node node2 pod downwardapi-volume-ca3e371f-b36d-4d5e-8375-b56a672c1bc6 container client-container: STEP: delete the pod Nov 13 00:56:45.838: INFO: Waiting for pod downwardapi-volume-ca3e371f-b36d-4d5e-8375-b56a672c1bc6 to disappear Nov 13 00:56:45.839: INFO: Pod downwardapi-volume-ca3e371f-b36d-4d5e-8375-b56a672c1bc6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:45.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6050" for this suite. 
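In manifest terms, "node allocatable (cpu) as default cpu limit" means the container declares no cpu limit yet projects limits.cpu through the downward API, and the kubelet resolves the value to the node's allocatable CPU. A sketch under the same assumptions as the previous one:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-cpu-example # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["mounttest", "--file_content=/etc/podinfo/cpu_limit"]
    # deliberately no resources.limits.cpu: the projected value falls
    # back to the node's allocatable CPU, which is what the spec checks
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu

The memory-request variant that passes later in this run uses the same mechanism, with resource: requests.memory on a container that does declare a request.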
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":314,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:32.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-l27bn in namespace proxy-6675 I1113 00:56:32.108893 22 runners.go:190] Created replication controller with name: proxy-service-l27bn, namespace: proxy-6675, replica count: 1 I1113 00:56:33.160431 22 runners.go:190] proxy-service-l27bn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 00:56:34.160707 22 runners.go:190] proxy-service-l27bn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 00:56:35.161320 22 runners.go:190] proxy-service-l27bn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1113 00:56:36.161609 22 runners.go:190] proxy-service-l27bn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1113 00:56:37.162072 22 runners.go:190] proxy-service-l27bn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1113 00:56:38.163210 22 runners.go:190] proxy-service-l27bn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1113 00:56:39.163590 22 runners.go:190] proxy-service-l27bn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1113 00:56:40.164835 22 runners.go:190] proxy-service-l27bn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1113 00:56:41.165086 22 runners.go:190] proxy-service-l27bn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1113 00:56:42.166353 22 runners.go:190] proxy-service-l27bn Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 13 00:56:42.168: INFO: setup took 10.069910617s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Nov 13 00:56:42.172: INFO: (0) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:162/proxy/: bar (200; 3.948525ms) Nov 13 00:56:42.172: INFO: (0) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 3.763346ms) Nov 13 00:56:42.172: INFO: (0) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 3.862565ms) Nov 13 00:56:42.173: INFO: (0) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:162/proxy/: bar (200; 3.817207ms) 
Nov 13 00:56:42.173: INFO: (0) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname1/proxy/: foo (200; 4.010402ms) Nov 13 00:56:42.174: INFO: (0) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname2/proxy/: bar (200; 5.813357ms) Nov 13 00:56:42.175: INFO: (0) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:1080/proxy/: ... (200; 6.01992ms) Nov 13 00:56:42.175: INFO: (0) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname2/proxy/: bar (200; 5.825186ms) Nov 13 00:56:42.175: INFO: (0) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2/proxy/: test (200; 5.893465ms) Nov 13 00:56:42.175: INFO: (0) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:1080/proxy/: test<... (200; 5.808035ms) Nov 13 00:56:42.175: INFO: (0) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname1/proxy/: foo (200; 5.963206ms) Nov 13 00:56:42.177: INFO: (0) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:443/proxy/: ... (200; 2.284755ms) Nov 13 00:56:42.180: INFO: (1) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 2.461544ms) Nov 13 00:56:42.180: INFO: (1) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 2.577264ms) Nov 13 00:56:42.180: INFO: (1) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:162/proxy/: bar (200; 3.000726ms) Nov 13 00:56:42.180: INFO: (1) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:460/proxy/: tls baz (200; 3.178709ms) Nov 13 00:56:42.181: INFO: (1) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:443/proxy/: test (200; 3.692805ms) Nov 13 00:56:42.181: INFO: (1) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname2/proxy/: tls qux (200; 3.843962ms) Nov 13 00:56:42.181: INFO: (1) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname1/proxy/: tls baz (200; 3.888181ms) Nov 13 00:56:42.181: INFO: (1) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname2/proxy/: bar (200; 4.042103ms) Nov 13 00:56:42.181: INFO: (1) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:1080/proxy/: test<... (200; 4.012154ms) Nov 13 00:56:42.181: INFO: (1) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname1/proxy/: foo (200; 4.162598ms) Nov 13 00:56:42.181: INFO: (1) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname2/proxy/: bar (200; 4.214852ms) Nov 13 00:56:42.184: INFO: (2) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:1080/proxy/: test<... (200; 2.656823ms) Nov 13 00:56:42.184: INFO: (2) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:1080/proxy/: ... 
(200; 2.643321ms) Nov 13 00:56:42.184: INFO: (2) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:462/proxy/: tls qux (200; 2.912532ms) Nov 13 00:56:42.185: INFO: (2) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname1/proxy/: foo (200; 2.992304ms) Nov 13 00:56:42.185: INFO: (2) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 2.834604ms) Nov 13 00:56:42.185: INFO: (2) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:162/proxy/: bar (200; 2.933365ms) Nov 13 00:56:42.185: INFO: (2) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:460/proxy/: tls baz (200; 2.887049ms) Nov 13 00:56:42.185: INFO: (2) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 3.053613ms) Nov 13 00:56:42.185: INFO: (2) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname1/proxy/: foo (200; 3.482394ms) Nov 13 00:56:42.185: INFO: (2) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2/proxy/: test (200; 3.31935ms) Nov 13 00:56:42.185: INFO: (2) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:443/proxy/: test (200; 3.398027ms) Nov 13 00:56:42.190: INFO: (3) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:462/proxy/: tls qux (200; 3.416978ms) Nov 13 00:56:42.190: INFO: (3) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname1/proxy/: tls baz (200; 3.614743ms) Nov 13 00:56:42.190: INFO: (3) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname2/proxy/: bar (200; 3.76771ms) Nov 13 00:56:42.191: INFO: (3) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:1080/proxy/: test<... (200; 4.589129ms) Nov 13 00:56:42.192: INFO: (3) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname1/proxy/: foo (200; 5.821418ms) Nov 13 00:56:42.192: INFO: (3) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 5.759625ms) Nov 13 00:56:42.192: INFO: (3) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:1080/proxy/: ... (200; 6.005148ms) Nov 13 00:56:42.192: INFO: (3) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname2/proxy/: tls qux (200; 6.113802ms) Nov 13 00:56:42.192: INFO: (3) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname2/proxy/: bar (200; 6.307067ms) Nov 13 00:56:42.193: INFO: (3) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname1/proxy/: foo (200; 6.419001ms) Nov 13 00:56:42.195: INFO: (4) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:162/proxy/: bar (200; 2.020306ms) Nov 13 00:56:42.195: INFO: (4) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:1080/proxy/: ... (200; 2.258025ms) Nov 13 00:56:42.196: INFO: (4) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:462/proxy/: tls qux (200; 3.337386ms) Nov 13 00:56:42.196: INFO: (4) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 3.144559ms) Nov 13 00:56:42.196: INFO: (4) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:162/proxy/: bar (200; 3.306978ms) Nov 13 00:56:42.196: INFO: (4) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:1080/proxy/: test<... 
(200; 3.444633ms) Nov 13 00:56:42.196: INFO: (4) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2/proxy/: test (200; 3.521726ms) Nov 13 00:56:42.196: INFO: (4) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:443/proxy/: test (200; 2.77401ms) Nov 13 00:56:42.200: INFO: (5) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 2.684263ms) Nov 13 00:56:42.200: INFO: (5) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:1080/proxy/: test<... (200; 3.092399ms) Nov 13 00:56:42.201: INFO: (5) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 3.205285ms) Nov 13 00:56:42.201: INFO: (5) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:1080/proxy/: ... (200; 3.359305ms) Nov 13 00:56:42.201: INFO: (5) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:462/proxy/: tls qux (200; 3.24272ms) Nov 13 00:56:42.201: INFO: (5) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:460/proxy/: tls baz (200; 3.194323ms) Nov 13 00:56:42.201: INFO: (5) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname2/proxy/: tls qux (200; 3.475902ms) Nov 13 00:56:42.201: INFO: (5) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname2/proxy/: bar (200; 3.710845ms) Nov 13 00:56:42.201: INFO: (5) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname1/proxy/: foo (200; 3.770205ms) Nov 13 00:56:42.201: INFO: (5) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname1/proxy/: foo (200; 3.59362ms) Nov 13 00:56:42.201: INFO: (5) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname1/proxy/: tls baz (200; 3.974355ms) Nov 13 00:56:42.201: INFO: (5) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname2/proxy/: bar (200; 4.202286ms) Nov 13 00:56:42.203: INFO: (6) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:162/proxy/: bar (200; 1.935695ms) Nov 13 00:56:42.204: INFO: (6) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:162/proxy/: bar (200; 2.459112ms) Nov 13 00:56:42.204: INFO: (6) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 2.614373ms) Nov 13 00:56:42.204: INFO: (6) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 2.792203ms) Nov 13 00:56:42.204: INFO: (6) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:462/proxy/: tls qux (200; 2.880307ms) Nov 13 00:56:42.205: INFO: (6) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:443/proxy/: test<... (200; 3.034118ms) Nov 13 00:56:42.205: INFO: (6) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname2/proxy/: tls qux (200; 3.331154ms) Nov 13 00:56:42.205: INFO: (6) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:1080/proxy/: ... 
(200; 3.13614ms) Nov 13 00:56:42.205: INFO: (6) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2/proxy/: test (200; 3.234276ms) Nov 13 00:56:42.205: INFO: (6) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname1/proxy/: foo (200; 3.520342ms) Nov 13 00:56:42.205: INFO: (6) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname2/proxy/: bar (200; 3.674418ms) Nov 13 00:56:42.205: INFO: (6) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname1/proxy/: tls baz (200; 3.840794ms) Nov 13 00:56:42.208: INFO: (7) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:162/proxy/: bar (200; 2.310974ms) Nov 13 00:56:42.208: INFO: (7) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 2.540666ms) Nov 13 00:56:42.208: INFO: (7) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 2.558814ms) Nov 13 00:56:42.208: INFO: (7) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:1080/proxy/: ... (200; 2.447872ms) Nov 13 00:56:42.208: INFO: (7) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2/proxy/: test (200; 2.512221ms) Nov 13 00:56:42.208: INFO: (7) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:1080/proxy/: test<... (200; 2.719093ms) Nov 13 00:56:42.208: INFO: (7) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:443/proxy/: ... (200; 2.544921ms) Nov 13 00:56:42.212: INFO: (8) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2/proxy/: test (200; 2.626508ms) Nov 13 00:56:42.213: INFO: (8) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:1080/proxy/: test<... (200; 3.069511ms) Nov 13 00:56:42.213: INFO: (8) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname2/proxy/: bar (200; 3.26986ms) Nov 13 00:56:42.213: INFO: (8) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:460/proxy/: tls baz (200; 3.347607ms) Nov 13 00:56:42.213: INFO: (8) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 3.388843ms) Nov 13 00:56:42.213: INFO: (8) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname1/proxy/: foo (200; 3.569993ms) Nov 13 00:56:42.213: INFO: (8) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname2/proxy/: bar (200; 3.485774ms) Nov 13 00:56:42.214: INFO: (8) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname1/proxy/: tls baz (200; 3.879582ms) Nov 13 00:56:42.214: INFO: (8) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname2/proxy/: tls qux (200; 3.979234ms) Nov 13 00:56:42.214: INFO: (8) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname1/proxy/: foo (200; 4.141611ms) Nov 13 00:56:42.216: INFO: (9) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:460/proxy/: tls baz (200; 2.105409ms) Nov 13 00:56:42.216: INFO: (9) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:162/proxy/: bar (200; 2.197413ms) Nov 13 00:56:42.216: INFO: (9) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:462/proxy/: tls qux (200; 2.078684ms) Nov 13 00:56:42.216: INFO: (9) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:443/proxy/: test<... 
(200; 2.502816ms) Nov 13 00:56:42.216: INFO: (9) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2/proxy/: test (200; 2.350851ms) Nov 13 00:56:42.217: INFO: (9) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:1080/proxy/: ... (200; 2.722504ms) Nov 13 00:56:42.217: INFO: (9) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 2.762268ms) Nov 13 00:56:42.217: INFO: (9) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:162/proxy/: bar (200; 2.90644ms) Nov 13 00:56:42.217: INFO: (9) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 3.297577ms) Nov 13 00:56:42.217: INFO: (9) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname1/proxy/: foo (200; 3.398039ms) Nov 13 00:56:42.218: INFO: (9) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname2/proxy/: bar (200; 3.618893ms) Nov 13 00:56:42.218: INFO: (9) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname1/proxy/: foo (200; 3.682724ms) Nov 13 00:56:42.218: INFO: (9) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname1/proxy/: tls baz (200; 3.918248ms) Nov 13 00:56:42.218: INFO: (9) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname2/proxy/: bar (200; 4.390044ms) Nov 13 00:56:42.219: INFO: (9) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname2/proxy/: tls qux (200; 4.667178ms) Nov 13 00:56:42.221: INFO: (10) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:1080/proxy/: ... (200; 2.52245ms) Nov 13 00:56:42.221: INFO: (10) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2/proxy/: test (200; 2.6336ms) Nov 13 00:56:42.222: INFO: (10) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:460/proxy/: tls baz (200; 2.589589ms) Nov 13 00:56:42.222: INFO: (10) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:162/proxy/: bar (200; 2.851109ms) Nov 13 00:56:42.222: INFO: (10) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 3.110507ms) Nov 13 00:56:42.222: INFO: (10) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 3.098368ms) Nov 13 00:56:42.222: INFO: (10) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:443/proxy/: test<... 
(200; 3.272587ms) Nov 13 00:56:42.223: INFO: (10) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname1/proxy/: foo (200; 3.498997ms) Nov 13 00:56:42.223: INFO: (10) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname1/proxy/: foo (200; 3.796348ms) Nov 13 00:56:42.223: INFO: (10) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname1/proxy/: tls baz (200; 3.785811ms) Nov 13 00:56:42.223: INFO: (10) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname2/proxy/: bar (200; 4.191204ms) Nov 13 00:56:42.223: INFO: (10) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname2/proxy/: tls qux (200; 4.589259ms) Nov 13 00:56:42.224: INFO: (10) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname2/proxy/: bar (200; 4.595863ms) Nov 13 00:56:42.226: INFO: (11) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:460/proxy/: tls baz (200; 1.950036ms) Nov 13 00:56:42.226: INFO: (11) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 2.162465ms) Nov 13 00:56:42.226: INFO: (11) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:462/proxy/: tls qux (200; 2.399086ms) Nov 13 00:56:42.226: INFO: (11) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:443/proxy/: test (200; 3.315822ms) Nov 13 00:56:42.227: INFO: (11) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:1080/proxy/: ... (200; 3.462345ms) Nov 13 00:56:42.227: INFO: (11) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname1/proxy/: foo (200; 3.59541ms) Nov 13 00:56:42.228: INFO: (11) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 3.666818ms) Nov 13 00:56:42.228: INFO: (11) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:1080/proxy/: test<... (200; 3.589661ms) Nov 13 00:56:42.228: INFO: (11) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname2/proxy/: bar (200; 4.300335ms) Nov 13 00:56:42.228: INFO: (11) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname1/proxy/: foo (200; 4.310841ms) Nov 13 00:56:42.229: INFO: (11) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname2/proxy/: bar (200; 4.558177ms) Nov 13 00:56:42.229: INFO: (11) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname2/proxy/: tls qux (200; 4.59188ms) Nov 13 00:56:42.229: INFO: (11) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname1/proxy/: tls baz (200; 4.893059ms) Nov 13 00:56:42.231: INFO: (12) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:1080/proxy/: ... (200; 1.960355ms) Nov 13 00:56:42.231: INFO: (12) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:443/proxy/: test (200; 2.32479ms) Nov 13 00:56:42.232: INFO: (12) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:1080/proxy/: test<... 
(200; 2.676375ms) Nov 13 00:56:42.232: INFO: (12) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:462/proxy/: tls qux (200; 2.823015ms) Nov 13 00:56:42.232: INFO: (12) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 2.966215ms) Nov 13 00:56:42.232: INFO: (12) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:460/proxy/: tls baz (200; 2.83068ms) Nov 13 00:56:42.232: INFO: (12) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:162/proxy/: bar (200; 2.907763ms) Nov 13 00:56:42.232: INFO: (12) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 3.541474ms) Nov 13 00:56:42.232: INFO: (12) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname1/proxy/: foo (200; 3.549603ms) Nov 13 00:56:42.233: INFO: (12) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname1/proxy/: foo (200; 3.350488ms) Nov 13 00:56:42.233: INFO: (12) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname2/proxy/: bar (200; 3.675372ms) Nov 13 00:56:42.233: INFO: (12) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname2/proxy/: bar (200; 3.600029ms) Nov 13 00:56:42.233: INFO: (12) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname2/proxy/: tls qux (200; 3.837063ms) Nov 13 00:56:42.233: INFO: (12) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname1/proxy/: tls baz (200; 3.977873ms) Nov 13 00:56:42.236: INFO: (13) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:1080/proxy/: test<... (200; 2.329034ms) Nov 13 00:56:42.236: INFO: (13) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:462/proxy/: tls qux (200; 2.520417ms) Nov 13 00:56:42.236: INFO: (13) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 2.426907ms) Nov 13 00:56:42.236: INFO: (13) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:162/proxy/: bar (200; 2.541526ms) Nov 13 00:56:42.236: INFO: (13) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:443/proxy/: ... (200; 3.012852ms) Nov 13 00:56:42.236: INFO: (13) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2/proxy/: test (200; 3.0155ms) Nov 13 00:56:42.236: INFO: (13) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 3.064135ms) Nov 13 00:56:42.237: INFO: (13) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname2/proxy/: tls qux (200; 3.121655ms) Nov 13 00:56:42.237: INFO: (13) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname2/proxy/: bar (200; 3.49834ms) Nov 13 00:56:42.237: INFO: (13) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname1/proxy/: foo (200; 3.4989ms) Nov 13 00:56:42.237: INFO: (13) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname1/proxy/: foo (200; 3.808963ms) Nov 13 00:56:42.237: INFO: (13) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname1/proxy/: tls baz (200; 3.626027ms) Nov 13 00:56:42.237: INFO: (13) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname2/proxy/: bar (200; 3.854377ms) Nov 13 00:56:42.240: INFO: (14) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 2.424269ms) Nov 13 00:56:42.240: INFO: (14) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:1080/proxy/: ... 
(200; 2.337424ms) Nov 13 00:56:42.240: INFO: (14) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 2.608212ms) Nov 13 00:56:42.240: INFO: (14) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:162/proxy/: bar (200; 2.668939ms) Nov 13 00:56:42.240: INFO: (14) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:462/proxy/: tls qux (200; 2.985822ms) Nov 13 00:56:42.240: INFO: (14) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:162/proxy/: bar (200; 2.989916ms) Nov 13 00:56:42.240: INFO: (14) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:443/proxy/: test<... (200; 2.934219ms) Nov 13 00:56:42.240: INFO: (14) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname2/proxy/: bar (200; 2.928543ms) Nov 13 00:56:42.241: INFO: (14) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname1/proxy/: foo (200; 3.620671ms) Nov 13 00:56:42.241: INFO: (14) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2/proxy/: test (200; 3.441503ms) Nov 13 00:56:42.241: INFO: (14) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname2/proxy/: tls qux (200; 3.701438ms) Nov 13 00:56:42.241: INFO: (14) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname1/proxy/: tls baz (200; 3.805982ms) Nov 13 00:56:42.242: INFO: (14) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname1/proxy/: foo (200; 4.294153ms) Nov 13 00:56:42.242: INFO: (14) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname2/proxy/: bar (200; 4.365682ms) Nov 13 00:56:42.244: INFO: (15) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:1080/proxy/: test<... (200; 1.954679ms) Nov 13 00:56:42.244: INFO: (15) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:1080/proxy/: ... (200; 2.105903ms) Nov 13 00:56:42.245: INFO: (15) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:462/proxy/: tls qux (200; 2.64852ms) Nov 13 00:56:42.245: INFO: (15) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:460/proxy/: tls baz (200; 2.651366ms) Nov 13 00:56:42.245: INFO: (15) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:162/proxy/: bar (200; 2.837735ms) Nov 13 00:56:42.245: INFO: (15) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:443/proxy/: test (200; 3.258464ms) Nov 13 00:56:42.245: INFO: (15) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname1/proxy/: foo (200; 3.464918ms) Nov 13 00:56:42.245: INFO: (15) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:162/proxy/: bar (200; 3.425366ms) Nov 13 00:56:42.245: INFO: (15) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname2/proxy/: tls qux (200; 3.450028ms) Nov 13 00:56:42.245: INFO: (15) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname1/proxy/: tls baz (200; 3.614328ms) Nov 13 00:56:42.245: INFO: (15) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname2/proxy/: bar (200; 3.475208ms) Nov 13 00:56:42.246: INFO: (15) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname1/proxy/: foo (200; 3.821249ms) Nov 13 00:56:42.246: INFO: (15) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname2/proxy/: bar (200; 3.737238ms) Nov 13 00:56:42.248: INFO: (16) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:1080/proxy/: test<... 
(200; 2.284033ms) Nov 13 00:56:42.248: INFO: (16) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:443/proxy/: ... (200; 2.723726ms) Nov 13 00:56:42.249: INFO: (16) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname1/proxy/: foo (200; 2.987362ms) Nov 13 00:56:42.249: INFO: (16) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2/proxy/: test (200; 2.899453ms) Nov 13 00:56:42.249: INFO: (16) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname1/proxy/: foo (200; 3.151497ms) Nov 13 00:56:42.249: INFO: (16) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname2/proxy/: bar (200; 3.291992ms) Nov 13 00:56:42.249: INFO: (16) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:460/proxy/: tls baz (200; 3.238971ms) Nov 13 00:56:42.249: INFO: (16) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname1/proxy/: tls baz (200; 3.385427ms) Nov 13 00:56:42.250: INFO: (16) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname2/proxy/: bar (200; 3.651946ms) Nov 13 00:56:42.250: INFO: (16) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname2/proxy/: tls qux (200; 4.245561ms) Nov 13 00:56:42.253: INFO: (17) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:462/proxy/: tls qux (200; 2.392392ms) Nov 13 00:56:42.253: INFO: (17) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 2.439599ms) Nov 13 00:56:42.253: INFO: (17) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:443/proxy/: ... (200; 2.779363ms) Nov 13 00:56:42.253: INFO: (17) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2/proxy/: test (200; 2.920646ms) Nov 13 00:56:42.254: INFO: (17) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:1080/proxy/: test<... (200; 3.139513ms) Nov 13 00:56:42.254: INFO: (17) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname2/proxy/: bar (200; 3.337851ms) Nov 13 00:56:42.254: INFO: (17) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 3.319023ms) Nov 13 00:56:42.254: INFO: (17) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:162/proxy/: bar (200; 3.408082ms) Nov 13 00:56:42.254: INFO: (17) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname1/proxy/: foo (200; 3.730604ms) Nov 13 00:56:42.254: INFO: (17) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname1/proxy/: foo (200; 3.71732ms) Nov 13 00:56:42.254: INFO: (17) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname2/proxy/: bar (200; 3.789023ms) Nov 13 00:56:42.254: INFO: (17) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname1/proxy/: tls baz (200; 3.937057ms) Nov 13 00:56:42.254: INFO: (17) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname2/proxy/: tls qux (200; 4.029454ms) Nov 13 00:56:42.256: INFO: (18) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:162/proxy/: bar (200; 1.933887ms) Nov 13 00:56:42.256: INFO: (18) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2/proxy/: test (200; 2.040937ms) Nov 13 00:56:42.257: INFO: (18) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:1080/proxy/: test<... 
(200; 2.204128ms) Nov 13 00:56:42.257: INFO: (18) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 2.194726ms) Nov 13 00:56:42.257: INFO: (18) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:1080/proxy/: ... (200; 2.489604ms) Nov 13 00:56:42.257: INFO: (18) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:443/proxy/: ... (200; 1.994643ms) Nov 13 00:56:42.260: INFO: (19) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:162/proxy/: bar (200; 2.31613ms) Nov 13 00:56:42.261: INFO: (19) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:460/proxy/: tls baz (200; 2.299095ms) Nov 13 00:56:42.261: INFO: (19) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2/proxy/: test (200; 2.493363ms) Nov 13 00:56:42.261: INFO: (19) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 2.692464ms) Nov 13 00:56:42.261: INFO: (19) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:162/proxy/: bar (200; 2.82549ms) Nov 13 00:56:42.261: INFO: (19) /api/v1/namespaces/proxy-6675/pods/proxy-service-l27bn-dtcf2:1080/proxy/: test<... (200; 2.97417ms) Nov 13 00:56:42.261: INFO: (19) /api/v1/namespaces/proxy-6675/pods/http:proxy-service-l27bn-dtcf2:160/proxy/: foo (200; 3.005578ms) Nov 13 00:56:42.261: INFO: (19) /api/v1/namespaces/proxy-6675/pods/https:proxy-service-l27bn-dtcf2:462/proxy/: tls qux (200; 3.209754ms) Nov 13 00:56:42.261: INFO: (19) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname2/proxy/: tls qux (200; 3.323861ms) Nov 13 00:56:42.261: INFO: (19) /api/v1/namespaces/proxy-6675/services/https:proxy-service-l27bn:tlsportname1/proxy/: tls baz (200; 3.435603ms) Nov 13 00:56:42.262: INFO: (19) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname1/proxy/: foo (200; 3.656603ms) Nov 13 00:56:42.262: INFO: (19) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname1/proxy/: foo (200; 4.00239ms) Nov 13 00:56:42.262: INFO: (19) /api/v1/namespaces/proxy-6675/services/http:proxy-service-l27bn:portname2/proxy/: bar (200; 3.947695ms) Nov 13 00:56:42.263: INFO: (19) /api/v1/namespaces/proxy-6675/services/proxy-service-l27bn:portname2/proxy/: bar (200; 4.432197ms) STEP: deleting ReplicationController proxy-service-l27bn in namespace proxy-6675, will wait for the garbage collector to delete the pods Nov 13 00:56:42.322: INFO: Deleting ReplicationController proxy-service-l27bn took: 4.971248ms Nov 13 00:56:42.423: INFO: Terminating ReplicationController proxy-service-l27bn pods took: 100.874924ms [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:51.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6675" for this suite. 
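The 16 URL shapes exercised above follow two grammars: /api/v1/namespaces/<ns>/pods/[scheme:]<pod>[:port]/proxy/<path> and /api/v1/namespaces/<ns>/services/[scheme:]<svc>:<portname>/proxy/<path>. The portname1/tlsportname1 segments are named Service ports; a sketch of the kind of Service that makes those URLs resolvable (the service name and targetPorts mirror the log; the selector and front-end port numbers are illustrative assumptions):

apiVersion: v1
kind: Service
metadata:
  name: proxy-service-l27bn            # name taken from the log
spec:
  selector:
    app: proxy-service-echo            # assumption: label on the replication controller's pods
  ports:
  - name: portname1                    # services/proxy-service-l27bn:portname1/proxy/
    port: 80
    targetPort: 160                    # pod port 160 answers "foo" above
  - name: portname2
    port: 81
    targetPort: 162                    # pod port 162 answers "bar" above
  - name: tlsportname1                 # services/https:proxy-service-l27bn:tlsportname1/proxy/
    port: 443
    targetPort: 460
  - name: tlsportname2
    port: 444
    targetPort: 462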
• [SLOW TEST:19.459 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":7,"skipped":33,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:51.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 13 00:56:51.581: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d6970756-ba97-4ca5-a294-12be8af47af5" in namespace "projected-7702" to be "Succeeded or Failed" Nov 13 00:56:51.584: INFO: Pod "downwardapi-volume-d6970756-ba97-4ca5-a294-12be8af47af5": Phase="Pending", Reason="", readiness=false. Elapsed: 1.962506ms Nov 13 00:56:53.588: INFO: Pod "downwardapi-volume-d6970756-ba97-4ca5-a294-12be8af47af5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006752573s Nov 13 00:56:55.591: INFO: Pod "downwardapi-volume-d6970756-ba97-4ca5-a294-12be8af47af5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009159834s STEP: Saw pod success Nov 13 00:56:55.591: INFO: Pod "downwardapi-volume-d6970756-ba97-4ca5-a294-12be8af47af5" satisfied condition "Succeeded or Failed" Nov 13 00:56:55.593: INFO: Trying to get logs from node node1 pod downwardapi-volume-d6970756-ba97-4ca5-a294-12be8af47af5 container client-container: STEP: delete the pod Nov 13 00:56:55.652: INFO: Waiting for pod downwardapi-volume-d6970756-ba97-4ca5-a294-12be8af47af5 to disappear Nov 13 00:56:55.655: INFO: Pod downwardapi-volume-d6970756-ba97-4ca5-a294-12be8af47af5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:55.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7702" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":38,"failed":0} S ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:42.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating all guestbook components Nov 13 00:56:42.524: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Nov 13 00:56:42.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7362 create -f -' Nov 13 00:56:42.949: INFO: stderr: "" Nov 13 00:56:42.949: INFO: stdout: "service/agnhost-replica created\n" Nov 13 00:56:42.949: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Nov 13 00:56:42.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7362 create -f -' Nov 13 00:56:43.331: INFO: stderr: "" Nov 13 00:56:43.331: INFO: stdout: "service/agnhost-primary created\n" Nov 13 00:56:43.331: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Nov 13 00:56:43.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7362 create -f -' Nov 13 00:56:43.700: INFO: stderr: "" Nov 13 00:56:43.700: INFO: stdout: "service/frontend created\n" Nov 13 00:56:43.700: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.32 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Nov 13 00:56:43.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7362 create -f -' Nov 13 00:56:44.044: INFO: stderr: "" Nov 13 00:56:44.044: INFO: stdout: "deployment.apps/frontend created\n" Nov 13 00:56:44.045: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.32 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Nov 13 00:56:44.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7362 create -f -' Nov 13 00:56:44.410: INFO: stderr: "" Nov 13 00:56:44.410: INFO: stdout: "deployment.apps/agnhost-primary created\n" Nov 13 00:56:44.410: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.32 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Nov 13 00:56:44.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7362 create -f -' Nov 13 00:56:44.759: INFO: stderr: "" Nov 13 00:56:44.759: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Nov 13 00:56:44.759: INFO: Waiting for all frontend pods to be Running. Nov 13 00:56:54.812: INFO: Waiting for frontend to serve content. Nov 13 00:56:54.819: INFO: Trying to add a new entry to the guestbook. Nov 13 00:56:54.828: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Nov 13 00:56:54.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7362 delete --grace-period=0 --force -f -' Nov 13 00:56:54.966: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 13 00:56:54.966: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Nov 13 00:56:54.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7362 delete --grace-period=0 --force -f -' Nov 13 00:56:55.107: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Nov 13 00:56:55.107: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Nov 13 00:56:55.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7362 delete --grace-period=0 --force -f -' Nov 13 00:56:55.249: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 13 00:56:55.249: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Nov 13 00:56:55.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7362 delete --grace-period=0 --force -f -' Nov 13 00:56:55.394: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 13 00:56:55.394: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Nov 13 00:56:55.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7362 delete --grace-period=0 --force -f -' Nov 13 00:56:55.523: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 13 00:56:55.523: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Nov 13 00:56:55.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7362 delete --grace-period=0 --force -f -' Nov 13 00:56:55.658: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 13 00:56:55.658: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:55.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7362" for this suite. 
• [SLOW TEST:13.165 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":9,"skipped":109,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:45.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service endpoint-test2 in namespace services-6953 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6953 to expose endpoints map[] Nov 13 00:56:45.926: INFO: successfully validated that service endpoint-test2 in namespace services-6953 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-6953 Nov 13 00:56:45.941: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:56:47.944: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:56:49.945: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:56:51.945: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6953 to expose endpoints map[pod1:[80]] Nov 13 00:56:51.957: INFO: successfully validated that service endpoint-test2 in namespace services-6953 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-6953 Nov 13 00:56:51.971: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:56:53.975: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:56:55.975: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6953 to expose endpoints map[pod1:[80] pod2:[80]] Nov 13 00:56:55.987: INFO: successfully validated that service endpoint-test2 in namespace services-6953 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-6953 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6953 to expose endpoints map[pod2:[80]] Nov 13 00:56:56.002: INFO: successfully validated that service endpoint-test2 in namespace services-6953 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-6953 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6953 to expose endpoints map[] Nov 13 
00:56:56.013: INFO: successfully validated that service endpoint-test2 in namespace services-6953 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:56:56.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6953" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:10.135 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:55.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:57:01.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8577" for this suite. • [SLOW TEST:6.052 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":112,"failed":0} SSSSSSS ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":16,"skipped":332,"failed":0} [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:56.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Nov 13 00:56:56.064: INFO: Waiting up to 5m0s for pod "pod-b907bf3b-822d-43ed-a21f-5f18c6e466b7" in namespace "emptydir-9785" to be "Succeeded or Failed" Nov 13 00:56:56.067: INFO: Pod "pod-b907bf3b-822d-43ed-a21f-5f18c6e466b7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.450721ms Nov 13 00:56:58.071: INFO: Pod "pod-b907bf3b-822d-43ed-a21f-5f18c6e466b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006614641s Nov 13 00:57:00.074: INFO: Pod "pod-b907bf3b-822d-43ed-a21f-5f18c6e466b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009935466s Nov 13 00:57:02.077: INFO: Pod "pod-b907bf3b-822d-43ed-a21f-5f18c6e466b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013271842s Nov 13 00:57:04.082: INFO: Pod "pod-b907bf3b-822d-43ed-a21f-5f18c6e466b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.017744416s STEP: Saw pod success Nov 13 00:57:04.082: INFO: Pod "pod-b907bf3b-822d-43ed-a21f-5f18c6e466b7" satisfied condition "Succeeded or Failed" Nov 13 00:57:04.085: INFO: Trying to get logs from node node2 pod pod-b907bf3b-822d-43ed-a21f-5f18c6e466b7 container test-container: STEP: delete the pod Nov 13 00:57:04.099: INFO: Waiting for pod pod-b907bf3b-822d-43ed-a21f-5f18c6e466b7 to disappear Nov 13 00:57:04.100: INFO: Pod pod-b907bf3b-822d-43ed-a21f-5f18c6e466b7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:57:04.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9785" for this suite. • [SLOW TEST:8.078 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":332,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:57:04.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Pod with a static label STEP: watching for Pod to be ready Nov 13 00:57:04.168: INFO: observed Pod pod-test in namespace pods-6222 in phase Pending with labels: map[test-pod-static:true] & conditions [] Nov 13 00:57:04.170: INFO: observed Pod pod-test in namespace pods-6222 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:57:04 +0000 UTC }] Nov 13 00:57:04.185: INFO: observed Pod pod-test in namespace pods-6222 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:57:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:57:04 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:57:04 
+0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:57:04 +0000 UTC }]
Nov 13 00:57:05.971: INFO: observed Pod pod-test in namespace pods-6222 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:57:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:57:04 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:57:04 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:57:04 +0000 UTC }]
Nov 13 00:57:07.702: INFO: Found Pod pod-test in namespace pods-6222 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:57:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:57:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:57:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:57:04 +0000 UTC }]
STEP: patching the Pod with a new Label and updated data
Nov 13 00:57:07.713: INFO: observed event type ADDED
STEP: getting the Pod and ensuring that it's patched
STEP: getting the PodStatus
STEP: replacing the Pod's status Ready condition to False
STEP: check the Pod again to ensure its Ready conditions are False
STEP: deleting the Pod via a Collection with a LabelSelector
STEP: watching for the Pod to be deleted
Nov 13 00:57:07.730: INFO: observed event type ADDED
Nov 13 00:57:07.730: INFO: observed event type MODIFIED
Nov 13 00:57:07.731: INFO: observed event type MODIFIED
Nov 13 00:57:07.731: INFO: observed event type MODIFIED
Nov 13 00:57:07.731: INFO: observed event type MODIFIED
Nov 13 00:57:07.731: INFO: observed event type MODIFIED
Nov 13 00:57:07.731: INFO: observed event type MODIFIED
Nov 13 00:57:07.731: INFO: observed event type MODIFIED
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:57:07.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6222" for this suite.
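The lifecycle exercised above (create a labeled Pod, watch it to Ready, patch it, inspect its PodStatus, then delete by label selector) has a close kubectl equivalent. A rough sketch; the pod name, image, and extra label are illustrative:

    # Create a labeled Pod and watch it reach the Ready condition.
    kubectl run pod-test --image=nginx --labels=test-pod-static=true
    kubectl wait pod/pod-test --for=condition=Ready --timeout=2m
    # Patch the Pod with a new label, then read its PodStatus back.
    kubectl label pod/pod-test patched=true
    kubectl get pod/pod-test -o jsonpath='{.status.phase}'
    # Delete via a Collection with a LabelSelector, as the test does.
    kubectl delete pods -l test-pod-static=true

The one step with no kubectl one-liner is replacing the status Ready condition, which the test performs directly against the Pod's status subresource.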
• ------------------------------ {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":18,"skipped":336,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:57:07.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Nov 13 00:57:07.804: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:57:07.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2957" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":19,"skipped":349,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:57:01.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Nov 13 00:57:01.771: INFO: namespace kubectl-8350 Nov 13 00:57:01.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8350 create -f -' Nov 13 00:57:02.121: INFO: stderr: "" Nov 13 00:57:02.121: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Nov 13 00:57:03.125: INFO: Selector matched 1 pods for map[app:agnhost] Nov 13 00:57:03.125: INFO: Found 0 / 1 Nov 13 00:57:04.124: INFO: Selector matched 1 pods for map[app:agnhost] Nov 13 00:57:04.124: INFO: Found 0 / 1 Nov 13 00:57:05.124: INFO: Selector matched 1 pods for map[app:agnhost] Nov 13 00:57:05.124: INFO: Found 1 / 1 Nov 13 00:57:05.124: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Nov 13 00:57:05.127: INFO: Selector matched 1 pods for map[app:agnhost] Nov 13 00:57:05.127: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Nov 13 00:57:05.127: INFO: wait on agnhost-primary startup in kubectl-8350 Nov 13 00:57:05.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8350 logs agnhost-primary-gkxbm agnhost-primary' Nov 13 00:57:05.321: INFO: stderr: "" Nov 13 00:57:05.321: INFO: stdout: "Paused\n" STEP: exposing RC Nov 13 00:57:05.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8350 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Nov 13 00:57:05.540: INFO: stderr: "" Nov 13 00:57:05.540: INFO: stdout: "service/rm2 exposed\n" Nov 13 00:57:05.542: INFO: Service rm2 in namespace kubectl-8350 found. STEP: exposing service Nov 13 00:57:07.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8350 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Nov 13 00:57:07.747: INFO: stderr: "" Nov 13 00:57:07.747: INFO: stdout: "service/rm3 exposed\n" Nov 13 00:57:07.750: INFO: Service rm3 in namespace kubectl-8350 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:57:09.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8350" for this suite. • [SLOW TEST:8.019 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1223 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":11,"skipped":119,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:55.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 00:56:55.714: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-6114 I1113 00:56:55.732282 22 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6114, replica count: 1 I1113 00:56:56.783506 22 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 00:56:57.784498 22 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 00:56:58.786297 22 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 00:56:59.786560 22 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 00:57:00.787035 22 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 00:57:01.787901 22 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 00:57:02.789034 22 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 13 00:57:02.897: INFO: Created: latency-svc-w5bkp Nov 13 00:57:02.902: INFO: Got endpoints: latency-svc-w5bkp [12.918948ms] Nov 13 00:57:02.909: INFO: Created: latency-svc-76bp6 Nov 13 00:57:02.911: INFO: Created: latency-svc-lmfgw Nov 13 00:57:02.911: INFO: Got endpoints: latency-svc-76bp6 [8.584131ms] Nov 13 00:57:02.913: INFO: Got endpoints: latency-svc-lmfgw [10.662282ms] Nov 13 00:57:02.914: INFO: Created: latency-svc-sz2hp Nov 13 00:57:02.916: INFO: Got endpoints: latency-svc-sz2hp [13.625802ms] Nov 13 00:57:02.917: INFO: Created: latency-svc-g9mrm Nov 13 00:57:02.919: INFO: Created: latency-svc-d46lx Nov 13 00:57:02.919: INFO: Got endpoints: latency-svc-g9mrm [16.19499ms] Nov 13 00:57:02.921: INFO: Got endpoints: latency-svc-d46lx [18.081988ms] Nov 13 00:57:02.923: INFO: Created: latency-svc-7d6ct Nov 13 00:57:02.925: INFO: Got endpoints: latency-svc-7d6ct [22.325326ms] Nov 13 00:57:02.926: INFO: Created: latency-svc-8xq27 Nov 13 00:57:02.928: INFO: Got endpoints: latency-svc-8xq27 [25.497878ms] Nov 13 00:57:02.929: INFO: Created: latency-svc-lhf8h Nov 13 00:57:02.932: INFO: Got endpoints: latency-svc-lhf8h [28.999177ms] Nov 13 00:57:02.932: INFO: Created: latency-svc-lnt9b Nov 13 00:57:02.935: INFO: Got endpoints: latency-svc-lnt9b [31.953447ms] Nov 13 00:57:02.935: INFO: Created: latency-svc-pnwdp Nov 13 00:57:02.937: INFO: Got endpoints: latency-svc-pnwdp [34.032563ms] Nov 13 00:57:02.937: INFO: Created: latency-svc-9bj2b Nov 13 00:57:02.939: INFO: Got endpoints: latency-svc-9bj2b [36.545253ms] Nov 13 00:57:02.940: INFO: Created: latency-svc-8wlzb Nov 13 00:57:02.943: INFO: Got endpoints: latency-svc-8wlzb [8.263411ms] Nov 13 00:57:02.944: INFO: Created: latency-svc-vwxtr Nov 13 00:57:02.946: INFO: Created: latency-svc-9p924 Nov 13 00:57:02.946: INFO: Got endpoints: latency-svc-vwxtr [42.727325ms] Nov 13 00:57:02.948: INFO: Got endpoints: latency-svc-9p924 [44.558822ms] Nov 13 00:57:02.949: INFO: Created: latency-svc-69dd8 Nov 13 00:57:02.951: INFO: Got endpoints: latency-svc-69dd8 [47.654939ms] Nov 13 00:57:02.952: INFO: Created: latency-svc-lkx9z Nov 13 00:57:02.955: INFO: Got endpoints: latency-svc-lkx9z [51.91704ms] Nov 13 00:57:02.957: INFO: Created: latency-svc-srfcd Nov 13 00:57:02.959: INFO: Got endpoints: latency-svc-srfcd [48.037908ms] Nov 13 00:57:02.960: INFO: Created: latency-svc-hl6pd Nov 13 00:57:02.962: INFO: Got endpoints: latency-svc-hl6pd [48.918236ms] Nov 13 00:57:02.963: INFO: Created: latency-svc-9g58b Nov 13 00:57:02.966: INFO: Got endpoints: latency-svc-9g58b [49.085067ms] Nov 13 00:57:02.966: INFO: Created: latency-svc-9rbt7 Nov 13 00:57:02.968: INFO: Got endpoints: latency-svc-9rbt7 [49.217787ms] Nov 13 00:57:02.969: INFO: Created: latency-svc-t9k86 Nov 13 00:57:02.971: INFO: Got endpoints: latency-svc-t9k86 [49.753062ms] Nov 13 00:57:02.972: INFO: Created: latency-svc-nqqzg Nov 13 00:57:02.975: INFO: Created: latency-svc-rjbvs Nov 13 00:57:02.976: INFO: Got 
endpoints: latency-svc-nqqzg [50.259798ms] Nov 13 00:57:02.978: INFO: Got endpoints: latency-svc-rjbvs [49.921664ms] Nov 13 00:57:02.979: INFO: Created: latency-svc-t7r8w Nov 13 00:57:02.982: INFO: Got endpoints: latency-svc-t7r8w [50.000062ms] Nov 13 00:57:02.982: INFO: Created: latency-svc-crs7t Nov 13 00:57:02.984: INFO: Got endpoints: latency-svc-crs7t [47.502728ms] Nov 13 00:57:02.985: INFO: Created: latency-svc-8pvmv Nov 13 00:57:02.987: INFO: Got endpoints: latency-svc-8pvmv [47.762362ms] Nov 13 00:57:02.988: INFO: Created: latency-svc-kf29d Nov 13 00:57:02.990: INFO: Got endpoints: latency-svc-kf29d [46.585489ms] Nov 13 00:57:02.991: INFO: Created: latency-svc-xmzfb Nov 13 00:57:02.992: INFO: Got endpoints: latency-svc-xmzfb [46.588957ms] Nov 13 00:57:02.992: INFO: Created: latency-svc-6xjlq Nov 13 00:57:02.995: INFO: Got endpoints: latency-svc-6xjlq [47.276281ms] Nov 13 00:57:02.996: INFO: Created: latency-svc-jg5b5 Nov 13 00:57:02.998: INFO: Got endpoints: latency-svc-jg5b5 [47.042068ms] Nov 13 00:57:03.000: INFO: Created: latency-svc-7wvkx Nov 13 00:57:03.002: INFO: Got endpoints: latency-svc-7wvkx [46.70784ms] Nov 13 00:57:03.002: INFO: Created: latency-svc-fplm7 Nov 13 00:57:03.004: INFO: Created: latency-svc-46wfs Nov 13 00:57:03.008: INFO: Created: latency-svc-w8xnn Nov 13 00:57:03.009: INFO: Created: latency-svc-27djl Nov 13 00:57:03.012: INFO: Created: latency-svc-5hvzh Nov 13 00:57:03.015: INFO: Created: latency-svc-h6gds Nov 13 00:57:03.018: INFO: Created: latency-svc-zr8kc Nov 13 00:57:03.020: INFO: Created: latency-svc-2nxtd Nov 13 00:57:03.023: INFO: Created: latency-svc-6trrw Nov 13 00:57:03.025: INFO: Created: latency-svc-zfkzm Nov 13 00:57:03.029: INFO: Created: latency-svc-jlj9f Nov 13 00:57:03.032: INFO: Created: latency-svc-jspkp Nov 13 00:57:03.033: INFO: Created: latency-svc-qz26s Nov 13 00:57:03.036: INFO: Created: latency-svc-fdzsz Nov 13 00:57:03.038: INFO: Created: latency-svc-4rv89 Nov 13 00:57:03.050: INFO: Got endpoints: latency-svc-fplm7 [91.191353ms] Nov 13 00:57:03.055: INFO: Created: latency-svc-8gqdr Nov 13 00:57:03.101: INFO: Got endpoints: latency-svc-46wfs [138.697481ms] Nov 13 00:57:03.107: INFO: Created: latency-svc-96p5t Nov 13 00:57:03.150: INFO: Got endpoints: latency-svc-w8xnn [184.7789ms] Nov 13 00:57:03.155: INFO: Created: latency-svc-bpkqr Nov 13 00:57:03.201: INFO: Got endpoints: latency-svc-27djl [232.174475ms] Nov 13 00:57:03.206: INFO: Created: latency-svc-p7pkw Nov 13 00:57:03.250: INFO: Got endpoints: latency-svc-5hvzh [278.920879ms] Nov 13 00:57:03.256: INFO: Created: latency-svc-5w7p4 Nov 13 00:57:03.300: INFO: Got endpoints: latency-svc-h6gds [324.477515ms] Nov 13 00:57:03.306: INFO: Created: latency-svc-wkgq8 Nov 13 00:57:03.351: INFO: Got endpoints: latency-svc-zr8kc [372.669067ms] Nov 13 00:57:03.356: INFO: Created: latency-svc-ph2rg Nov 13 00:57:03.400: INFO: Got endpoints: latency-svc-2nxtd [417.959731ms] Nov 13 00:57:03.406: INFO: Created: latency-svc-pkxwr Nov 13 00:57:03.449: INFO: Got endpoints: latency-svc-6trrw [464.947724ms] Nov 13 00:57:03.455: INFO: Created: latency-svc-nckmq Nov 13 00:57:03.500: INFO: Got endpoints: latency-svc-zfkzm [512.906222ms] Nov 13 00:57:03.506: INFO: Created: latency-svc-tznbt Nov 13 00:57:03.550: INFO: Got endpoints: latency-svc-jlj9f [560.749989ms] Nov 13 00:57:03.557: INFO: Created: latency-svc-fcd5x Nov 13 00:57:03.600: INFO: Got endpoints: latency-svc-jspkp [608.157302ms] Nov 13 00:57:03.607: INFO: Created: latency-svc-9zb5j Nov 13 00:57:03.650: INFO: Got endpoints: latency-svc-qz26s 
[654.932382ms] Nov 13 00:57:03.655: INFO: Created: latency-svc-86gvg Nov 13 00:57:03.701: INFO: Got endpoints: latency-svc-fdzsz [703.420024ms] Nov 13 00:57:03.707: INFO: Created: latency-svc-j6l6f Nov 13 00:57:03.750: INFO: Got endpoints: latency-svc-4rv89 [748.421327ms] Nov 13 00:57:03.755: INFO: Created: latency-svc-pwfwq Nov 13 00:57:03.801: INFO: Got endpoints: latency-svc-8gqdr [750.599831ms] Nov 13 00:57:03.806: INFO: Created: latency-svc-k448v Nov 13 00:57:03.851: INFO: Got endpoints: latency-svc-96p5t [750.222821ms] Nov 13 00:57:03.857: INFO: Created: latency-svc-r27cp Nov 13 00:57:03.900: INFO: Got endpoints: latency-svc-bpkqr [749.205065ms] Nov 13 00:57:03.906: INFO: Created: latency-svc-29ltg Nov 13 00:57:03.951: INFO: Got endpoints: latency-svc-p7pkw [750.571678ms] Nov 13 00:57:03.957: INFO: Created: latency-svc-h2vjm Nov 13 00:57:04.001: INFO: Got endpoints: latency-svc-5w7p4 [750.909191ms] Nov 13 00:57:04.007: INFO: Created: latency-svc-672kd Nov 13 00:57:04.050: INFO: Got endpoints: latency-svc-wkgq8 [750.123952ms] Nov 13 00:57:04.057: INFO: Created: latency-svc-t5sgt Nov 13 00:57:04.100: INFO: Got endpoints: latency-svc-ph2rg [749.070565ms] Nov 13 00:57:04.105: INFO: Created: latency-svc-j6gzh Nov 13 00:57:04.151: INFO: Got endpoints: latency-svc-pkxwr [750.789442ms] Nov 13 00:57:04.158: INFO: Created: latency-svc-nzh7c Nov 13 00:57:04.201: INFO: Got endpoints: latency-svc-nckmq [751.216166ms] Nov 13 00:57:04.206: INFO: Created: latency-svc-tpf5h Nov 13 00:57:04.250: INFO: Got endpoints: latency-svc-tznbt [749.41188ms] Nov 13 00:57:04.255: INFO: Created: latency-svc-vj4qc Nov 13 00:57:04.301: INFO: Got endpoints: latency-svc-fcd5x [750.383167ms] Nov 13 00:57:04.307: INFO: Created: latency-svc-2zrpc Nov 13 00:57:04.351: INFO: Got endpoints: latency-svc-9zb5j [750.179659ms] Nov 13 00:57:04.358: INFO: Created: latency-svc-flk7h Nov 13 00:57:04.401: INFO: Got endpoints: latency-svc-86gvg [750.946895ms] Nov 13 00:57:04.408: INFO: Created: latency-svc-gbxd9 Nov 13 00:57:04.451: INFO: Got endpoints: latency-svc-j6l6f [749.681088ms] Nov 13 00:57:04.457: INFO: Created: latency-svc-8r8x2 Nov 13 00:57:04.500: INFO: Got endpoints: latency-svc-pwfwq [749.711529ms] Nov 13 00:57:04.506: INFO: Created: latency-svc-g2hvb Nov 13 00:57:04.551: INFO: Got endpoints: latency-svc-k448v [749.992457ms] Nov 13 00:57:04.558: INFO: Created: latency-svc-w25jl Nov 13 00:57:04.601: INFO: Got endpoints: latency-svc-r27cp [749.22112ms] Nov 13 00:57:04.607: INFO: Created: latency-svc-s7jqn Nov 13 00:57:04.650: INFO: Got endpoints: latency-svc-29ltg [750.732421ms] Nov 13 00:57:04.656: INFO: Created: latency-svc-rh92w Nov 13 00:57:04.700: INFO: Got endpoints: latency-svc-h2vjm [748.700613ms] Nov 13 00:57:04.706: INFO: Created: latency-svc-pwh7m Nov 13 00:57:04.750: INFO: Got endpoints: latency-svc-672kd [749.480435ms] Nov 13 00:57:04.756: INFO: Created: latency-svc-kpd44 Nov 13 00:57:04.800: INFO: Got endpoints: latency-svc-t5sgt [749.253183ms] Nov 13 00:57:04.807: INFO: Created: latency-svc-269gn Nov 13 00:57:04.849: INFO: Got endpoints: latency-svc-j6gzh [749.315845ms] Nov 13 00:57:04.855: INFO: Created: latency-svc-q6hjq Nov 13 00:57:04.901: INFO: Got endpoints: latency-svc-nzh7c [749.865473ms] Nov 13 00:57:04.906: INFO: Created: latency-svc-gd26q Nov 13 00:57:04.952: INFO: Got endpoints: latency-svc-tpf5h [751.059271ms] Nov 13 00:57:04.958: INFO: Created: latency-svc-z585q Nov 13 00:57:05.001: INFO: Got endpoints: latency-svc-vj4qc [750.792884ms] Nov 13 00:57:05.007: INFO: Created: latency-svc-b5ndv Nov 
13 00:57:05.050: INFO: Got endpoints: latency-svc-2zrpc [749.647177ms] Nov 13 00:57:05.056: INFO: Created: latency-svc-rx5bd Nov 13 00:57:05.100: INFO: Got endpoints: latency-svc-flk7h [749.594817ms] Nov 13 00:57:05.106: INFO: Created: latency-svc-7tnlk Nov 13 00:57:05.151: INFO: Got endpoints: latency-svc-gbxd9 [750.189955ms] Nov 13 00:57:05.157: INFO: Created: latency-svc-stzc7 Nov 13 00:57:05.200: INFO: Got endpoints: latency-svc-8r8x2 [749.004659ms] Nov 13 00:57:05.205: INFO: Created: latency-svc-nqm5m Nov 13 00:57:05.251: INFO: Got endpoints: latency-svc-g2hvb [750.852947ms] Nov 13 00:57:05.257: INFO: Created: latency-svc-54qjn Nov 13 00:57:05.301: INFO: Got endpoints: latency-svc-w25jl [750.012977ms] Nov 13 00:57:05.307: INFO: Created: latency-svc-xlqvb Nov 13 00:57:05.351: INFO: Got endpoints: latency-svc-s7jqn [750.136662ms] Nov 13 00:57:05.356: INFO: Created: latency-svc-lmdkn Nov 13 00:57:05.400: INFO: Got endpoints: latency-svc-rh92w [749.853961ms] Nov 13 00:57:05.406: INFO: Created: latency-svc-6jqqx Nov 13 00:57:05.451: INFO: Got endpoints: latency-svc-pwh7m [750.534383ms] Nov 13 00:57:05.456: INFO: Created: latency-svc-n96zb Nov 13 00:57:05.500: INFO: Got endpoints: latency-svc-kpd44 [749.451428ms] Nov 13 00:57:05.505: INFO: Created: latency-svc-gcn44 Nov 13 00:57:05.550: INFO: Got endpoints: latency-svc-269gn [750.368613ms] Nov 13 00:57:05.555: INFO: Created: latency-svc-p9r8p Nov 13 00:57:05.600: INFO: Got endpoints: latency-svc-q6hjq [750.027959ms] Nov 13 00:57:05.605: INFO: Created: latency-svc-ksbc2 Nov 13 00:57:05.651: INFO: Got endpoints: latency-svc-gd26q [749.896412ms] Nov 13 00:57:05.657: INFO: Created: latency-svc-gsmbm Nov 13 00:57:05.700: INFO: Got endpoints: latency-svc-z585q [748.470049ms] Nov 13 00:57:05.707: INFO: Created: latency-svc-r57jl Nov 13 00:57:05.750: INFO: Got endpoints: latency-svc-b5ndv [749.6787ms] Nov 13 00:57:05.757: INFO: Created: latency-svc-txgs9 Nov 13 00:57:05.801: INFO: Got endpoints: latency-svc-rx5bd [750.33915ms] Nov 13 00:57:05.808: INFO: Created: latency-svc-s2q5x Nov 13 00:57:05.850: INFO: Got endpoints: latency-svc-7tnlk [750.084841ms] Nov 13 00:57:05.855: INFO: Created: latency-svc-m47qb Nov 13 00:57:05.900: INFO: Got endpoints: latency-svc-stzc7 [749.308902ms] Nov 13 00:57:05.906: INFO: Created: latency-svc-f4x6w Nov 13 00:57:05.951: INFO: Got endpoints: latency-svc-nqm5m [751.000304ms] Nov 13 00:57:05.957: INFO: Created: latency-svc-48nvv Nov 13 00:57:06.001: INFO: Got endpoints: latency-svc-54qjn [750.338892ms] Nov 13 00:57:06.007: INFO: Created: latency-svc-wnqht Nov 13 00:57:06.050: INFO: Got endpoints: latency-svc-xlqvb [748.395928ms] Nov 13 00:57:06.055: INFO: Created: latency-svc-49smb Nov 13 00:57:06.100: INFO: Got endpoints: latency-svc-lmdkn [749.317443ms] Nov 13 00:57:06.106: INFO: Created: latency-svc-v2k57 Nov 13 00:57:06.150: INFO: Got endpoints: latency-svc-6jqqx [750.116399ms] Nov 13 00:57:06.156: INFO: Created: latency-svc-mfr7c Nov 13 00:57:06.200: INFO: Got endpoints: latency-svc-n96zb [749.650686ms] Nov 13 00:57:06.206: INFO: Created: latency-svc-k6b2k Nov 13 00:57:06.250: INFO: Got endpoints: latency-svc-gcn44 [750.600192ms] Nov 13 00:57:06.256: INFO: Created: latency-svc-htj92 Nov 13 00:57:06.351: INFO: Got endpoints: latency-svc-p9r8p [801.250995ms] Nov 13 00:57:06.357: INFO: Created: latency-svc-wq2p8 Nov 13 00:57:06.401: INFO: Got endpoints: latency-svc-ksbc2 [800.98692ms] Nov 13 00:57:06.406: INFO: Created: latency-svc-dct42 Nov 13 00:57:06.451: INFO: Got endpoints: latency-svc-gsmbm [799.960073ms] Nov 
13 00:57:06.457: INFO: Created: latency-svc-qp58h Nov 13 00:57:06.501: INFO: Got endpoints: latency-svc-r57jl [800.478395ms] Nov 13 00:57:06.506: INFO: Created: latency-svc-ld64n Nov 13 00:57:06.550: INFO: Got endpoints: latency-svc-txgs9 [799.764158ms] Nov 13 00:57:06.555: INFO: Created: latency-svc-kqxgz Nov 13 00:57:06.600: INFO: Got endpoints: latency-svc-s2q5x [799.515223ms] Nov 13 00:57:06.607: INFO: Created: latency-svc-pwpp2 Nov 13 00:57:06.650: INFO: Got endpoints: latency-svc-m47qb [799.059186ms] Nov 13 00:57:06.656: INFO: Created: latency-svc-zwvk9 Nov 13 00:57:06.701: INFO: Got endpoints: latency-svc-f4x6w [800.057515ms] Nov 13 00:57:06.706: INFO: Created: latency-svc-cv8g6 Nov 13 00:57:06.749: INFO: Got endpoints: latency-svc-48nvv [798.317154ms] Nov 13 00:57:06.755: INFO: Created: latency-svc-hr2d4 Nov 13 00:57:06.799: INFO: Got endpoints: latency-svc-wnqht [798.295892ms] Nov 13 00:57:06.805: INFO: Created: latency-svc-7v7c5 Nov 13 00:57:06.850: INFO: Got endpoints: latency-svc-49smb [800.657733ms] Nov 13 00:57:06.856: INFO: Created: latency-svc-rjkwc Nov 13 00:57:06.900: INFO: Got endpoints: latency-svc-v2k57 [799.420394ms] Nov 13 00:57:06.905: INFO: Created: latency-svc-m26sn Nov 13 00:57:06.950: INFO: Got endpoints: latency-svc-mfr7c [799.199988ms] Nov 13 00:57:06.958: INFO: Created: latency-svc-nh9n5 Nov 13 00:57:07.000: INFO: Got endpoints: latency-svc-k6b2k [799.490907ms] Nov 13 00:57:07.007: INFO: Created: latency-svc-jvr6x Nov 13 00:57:07.050: INFO: Got endpoints: latency-svc-htj92 [799.567978ms] Nov 13 00:57:07.056: INFO: Created: latency-svc-zh78c Nov 13 00:57:07.100: INFO: Got endpoints: latency-svc-wq2p8 [748.343854ms] Nov 13 00:57:07.105: INFO: Created: latency-svc-g7gv5 Nov 13 00:57:07.150: INFO: Got endpoints: latency-svc-dct42 [749.527191ms] Nov 13 00:57:07.156: INFO: Created: latency-svc-pcxk7 Nov 13 00:57:07.201: INFO: Got endpoints: latency-svc-qp58h [750.197785ms] Nov 13 00:57:07.207: INFO: Created: latency-svc-kppzf Nov 13 00:57:07.251: INFO: Got endpoints: latency-svc-ld64n [750.239663ms] Nov 13 00:57:07.257: INFO: Created: latency-svc-wfgqc Nov 13 00:57:07.299: INFO: Got endpoints: latency-svc-kqxgz [749.134157ms] Nov 13 00:57:07.305: INFO: Created: latency-svc-245rx Nov 13 00:57:07.349: INFO: Got endpoints: latency-svc-pwpp2 [748.854747ms] Nov 13 00:57:07.355: INFO: Created: latency-svc-sdjdv Nov 13 00:57:07.400: INFO: Got endpoints: latency-svc-zwvk9 [750.039089ms] Nov 13 00:57:07.405: INFO: Created: latency-svc-wf54t Nov 13 00:57:07.451: INFO: Got endpoints: latency-svc-cv8g6 [750.155561ms] Nov 13 00:57:07.456: INFO: Created: latency-svc-cxcjj Nov 13 00:57:07.500: INFO: Got endpoints: latency-svc-hr2d4 [750.136681ms] Nov 13 00:57:07.504: INFO: Created: latency-svc-sqrrs Nov 13 00:57:07.550: INFO: Got endpoints: latency-svc-7v7c5 [749.973353ms] Nov 13 00:57:07.556: INFO: Created: latency-svc-7q4gh Nov 13 00:57:07.601: INFO: Got endpoints: latency-svc-rjkwc [750.953913ms] Nov 13 00:57:07.607: INFO: Created: latency-svc-nqccr Nov 13 00:57:07.650: INFO: Got endpoints: latency-svc-m26sn [750.141617ms] Nov 13 00:57:07.656: INFO: Created: latency-svc-6v9jf Nov 13 00:57:07.700: INFO: Got endpoints: latency-svc-nh9n5 [750.248953ms] Nov 13 00:57:07.706: INFO: Created: latency-svc-rzf26 Nov 13 00:57:07.750: INFO: Got endpoints: latency-svc-jvr6x [749.878643ms] Nov 13 00:57:07.755: INFO: Created: latency-svc-mnpqb Nov 13 00:57:07.800: INFO: Got endpoints: latency-svc-zh78c [749.552927ms] Nov 13 00:57:07.805: INFO: Created: latency-svc-vv826 Nov 13 00:57:07.901: 
INFO: Got endpoints: latency-svc-g7gv5 [801.282949ms] Nov 13 00:57:07.925: INFO: Created: latency-svc-vkmlj Nov 13 00:57:07.950: INFO: Got endpoints: latency-svc-pcxk7 [799.844864ms] Nov 13 00:57:07.955: INFO: Created: latency-svc-bggm5 Nov 13 00:57:08.000: INFO: Got endpoints: latency-svc-kppzf [799.190538ms] Nov 13 00:57:08.005: INFO: Created: latency-svc-n8trq Nov 13 00:57:08.050: INFO: Got endpoints: latency-svc-wfgqc [798.994995ms] Nov 13 00:57:08.056: INFO: Created: latency-svc-q5jm6 Nov 13 00:57:08.100: INFO: Got endpoints: latency-svc-245rx [800.973364ms] Nov 13 00:57:08.106: INFO: Created: latency-svc-g2km9 Nov 13 00:57:08.150: INFO: Got endpoints: latency-svc-sdjdv [800.509929ms] Nov 13 00:57:08.155: INFO: Created: latency-svc-dlnq6 Nov 13 00:57:08.200: INFO: Got endpoints: latency-svc-wf54t [800.572712ms] Nov 13 00:57:08.206: INFO: Created: latency-svc-r4tgp Nov 13 00:57:08.250: INFO: Got endpoints: latency-svc-cxcjj [799.325979ms] Nov 13 00:57:08.255: INFO: Created: latency-svc-6r6z2 Nov 13 00:57:08.299: INFO: Got endpoints: latency-svc-sqrrs [799.800667ms] Nov 13 00:57:08.305: INFO: Created: latency-svc-jcbhx Nov 13 00:57:08.351: INFO: Got endpoints: latency-svc-7q4gh [801.440428ms] Nov 13 00:57:08.357: INFO: Created: latency-svc-gd4f2 Nov 13 00:57:08.400: INFO: Got endpoints: latency-svc-nqccr [798.88076ms] Nov 13 00:57:08.406: INFO: Created: latency-svc-hx6jl Nov 13 00:57:08.450: INFO: Got endpoints: latency-svc-6v9jf [800.498159ms] Nov 13 00:57:08.456: INFO: Created: latency-svc-vt25t Nov 13 00:57:08.500: INFO: Got endpoints: latency-svc-rzf26 [800.290322ms] Nov 13 00:57:08.506: INFO: Created: latency-svc-vg5hd Nov 13 00:57:08.600: INFO: Got endpoints: latency-svc-mnpqb [850.009484ms] Nov 13 00:57:08.605: INFO: Created: latency-svc-7bll5 Nov 13 00:57:08.651: INFO: Got endpoints: latency-svc-vv826 [851.250694ms] Nov 13 00:57:08.657: INFO: Created: latency-svc-q2tlm Nov 13 00:57:08.700: INFO: Got endpoints: latency-svc-vkmlj [798.931938ms] Nov 13 00:57:08.706: INFO: Created: latency-svc-pwknb Nov 13 00:57:08.800: INFO: Got endpoints: latency-svc-bggm5 [850.270405ms] Nov 13 00:57:08.807: INFO: Created: latency-svc-6dh6k Nov 13 00:57:08.850: INFO: Got endpoints: latency-svc-n8trq [850.269646ms] Nov 13 00:57:08.856: INFO: Created: latency-svc-msxxx Nov 13 00:57:08.901: INFO: Got endpoints: latency-svc-q5jm6 [850.503314ms] Nov 13 00:57:08.906: INFO: Created: latency-svc-wfv2r Nov 13 00:57:08.950: INFO: Got endpoints: latency-svc-g2km9 [849.268946ms] Nov 13 00:57:08.956: INFO: Created: latency-svc-ttt5n Nov 13 00:57:09.000: INFO: Got endpoints: latency-svc-dlnq6 [850.455983ms] Nov 13 00:57:09.006: INFO: Created: latency-svc-qjmw8 Nov 13 00:57:09.051: INFO: Got endpoints: latency-svc-r4tgp [850.712385ms] Nov 13 00:57:09.056: INFO: Created: latency-svc-tmjx9 Nov 13 00:57:09.151: INFO: Got endpoints: latency-svc-6r6z2 [900.455306ms] Nov 13 00:57:09.156: INFO: Created: latency-svc-xdswt Nov 13 00:57:09.201: INFO: Got endpoints: latency-svc-jcbhx [901.049244ms] Nov 13 00:57:09.205: INFO: Created: latency-svc-z8qtf Nov 13 00:57:09.250: INFO: Got endpoints: latency-svc-gd4f2 [899.061842ms] Nov 13 00:57:09.256: INFO: Created: latency-svc-r8kg8 Nov 13 00:57:09.300: INFO: Got endpoints: latency-svc-hx6jl [899.825318ms] Nov 13 00:57:09.306: INFO: Created: latency-svc-v6qgz Nov 13 00:57:09.350: INFO: Got endpoints: latency-svc-vt25t [899.374227ms] Nov 13 00:57:09.354: INFO: Created: latency-svc-qjcns Nov 13 00:57:09.400: INFO: Got endpoints: latency-svc-vg5hd [899.716372ms] Nov 13 
00:57:09.405: INFO: Created: latency-svc-5nr9w Nov 13 00:57:09.451: INFO: Got endpoints: latency-svc-7bll5 [850.676289ms] Nov 13 00:57:09.457: INFO: Created: latency-svc-m4rjm Nov 13 00:57:09.502: INFO: Got endpoints: latency-svc-q2tlm [850.624395ms] Nov 13 00:57:09.507: INFO: Created: latency-svc-6kkjp Nov 13 00:57:09.550: INFO: Got endpoints: latency-svc-pwknb [849.743288ms] Nov 13 00:57:09.555: INFO: Created: latency-svc-jw6vg Nov 13 00:57:09.601: INFO: Got endpoints: latency-svc-6dh6k [800.820778ms] Nov 13 00:57:09.607: INFO: Created: latency-svc-cvxpz Nov 13 00:57:09.651: INFO: Got endpoints: latency-svc-msxxx [800.688242ms] Nov 13 00:57:09.656: INFO: Created: latency-svc-72tqb Nov 13 00:57:09.700: INFO: Got endpoints: latency-svc-wfv2r [799.68054ms] Nov 13 00:57:09.706: INFO: Created: latency-svc-lxwjb Nov 13 00:57:09.750: INFO: Got endpoints: latency-svc-ttt5n [800.747107ms] Nov 13 00:57:09.756: INFO: Created: latency-svc-tq76x Nov 13 00:57:09.800: INFO: Got endpoints: latency-svc-qjmw8 [799.901452ms] Nov 13 00:57:09.806: INFO: Created: latency-svc-2w2xs Nov 13 00:57:09.850: INFO: Got endpoints: latency-svc-tmjx9 [799.120519ms] Nov 13 00:57:09.856: INFO: Created: latency-svc-v2xl9 Nov 13 00:57:09.901: INFO: Got endpoints: latency-svc-xdswt [750.133661ms] Nov 13 00:57:09.907: INFO: Created: latency-svc-x6tg4 Nov 13 00:57:09.950: INFO: Got endpoints: latency-svc-z8qtf [749.384559ms] Nov 13 00:57:09.955: INFO: Created: latency-svc-s47vn Nov 13 00:57:10.000: INFO: Got endpoints: latency-svc-r8kg8 [749.729643ms] Nov 13 00:57:10.012: INFO: Created: latency-svc-db7cp Nov 13 00:57:10.051: INFO: Got endpoints: latency-svc-v6qgz [750.827426ms] Nov 13 00:57:10.057: INFO: Created: latency-svc-vxpqm Nov 13 00:57:10.100: INFO: Got endpoints: latency-svc-qjcns [750.441395ms] Nov 13 00:57:10.106: INFO: Created: latency-svc-n8khd Nov 13 00:57:10.151: INFO: Got endpoints: latency-svc-5nr9w [750.810434ms] Nov 13 00:57:10.158: INFO: Created: latency-svc-d2c2h Nov 13 00:57:10.201: INFO: Got endpoints: latency-svc-m4rjm [750.174772ms] Nov 13 00:57:10.207: INFO: Created: latency-svc-tmfbm Nov 13 00:57:10.250: INFO: Got endpoints: latency-svc-6kkjp [748.585578ms] Nov 13 00:57:10.255: INFO: Created: latency-svc-ztt7t Nov 13 00:57:10.300: INFO: Got endpoints: latency-svc-jw6vg [750.460142ms] Nov 13 00:57:10.306: INFO: Created: latency-svc-h8k76 Nov 13 00:57:10.350: INFO: Got endpoints: latency-svc-cvxpz [748.691533ms] Nov 13 00:57:10.355: INFO: Created: latency-svc-jx9md Nov 13 00:57:10.401: INFO: Got endpoints: latency-svc-72tqb [749.336855ms] Nov 13 00:57:10.410: INFO: Created: latency-svc-v69sk Nov 13 00:57:10.451: INFO: Got endpoints: latency-svc-lxwjb [750.3289ms] Nov 13 00:57:10.457: INFO: Created: latency-svc-mlzft Nov 13 00:57:10.501: INFO: Got endpoints: latency-svc-tq76x [750.209306ms] Nov 13 00:57:10.507: INFO: Created: latency-svc-5f4qs Nov 13 00:57:10.550: INFO: Got endpoints: latency-svc-2w2xs [749.092657ms] Nov 13 00:57:10.555: INFO: Created: latency-svc-cv792 Nov 13 00:57:10.599: INFO: Got endpoints: latency-svc-v2xl9 [749.119067ms] Nov 13 00:57:10.605: INFO: Created: latency-svc-2xr4q Nov 13 00:57:10.650: INFO: Got endpoints: latency-svc-x6tg4 [748.764747ms] Nov 13 00:57:10.656: INFO: Created: latency-svc-vdnpj Nov 13 00:57:10.702: INFO: Got endpoints: latency-svc-s47vn [751.927721ms] Nov 13 00:57:10.707: INFO: Created: latency-svc-8rdkx Nov 13 00:57:10.750: INFO: Got endpoints: latency-svc-db7cp [749.586764ms] Nov 13 00:57:10.755: INFO: Created: latency-svc-gq7tk Nov 13 00:57:10.801: INFO: 
Got endpoints: latency-svc-vxpqm [749.846094ms] Nov 13 00:57:10.806: INFO: Created: latency-svc-9skxm Nov 13 00:57:10.850: INFO: Got endpoints: latency-svc-n8khd [749.616924ms] Nov 13 00:57:10.855: INFO: Created: latency-svc-g94nz Nov 13 00:57:10.900: INFO: Got endpoints: latency-svc-d2c2h [748.593658ms] Nov 13 00:57:10.905: INFO: Created: latency-svc-q96qm Nov 13 00:57:10.950: INFO: Got endpoints: latency-svc-tmfbm [749.246722ms] Nov 13 00:57:10.956: INFO: Created: latency-svc-b97nr Nov 13 00:57:11.001: INFO: Got endpoints: latency-svc-ztt7t [750.186183ms] Nov 13 00:57:11.051: INFO: Got endpoints: latency-svc-h8k76 [750.421675ms] Nov 13 00:57:11.099: INFO: Got endpoints: latency-svc-jx9md [749.375891ms] Nov 13 00:57:11.151: INFO: Got endpoints: latency-svc-v69sk [750.352254ms] Nov 13 00:57:11.199: INFO: Got endpoints: latency-svc-mlzft [748.445912ms] Nov 13 00:57:11.249: INFO: Got endpoints: latency-svc-5f4qs [748.557482ms] Nov 13 00:57:11.299: INFO: Got endpoints: latency-svc-cv792 [749.838985ms] Nov 13 00:57:11.350: INFO: Got endpoints: latency-svc-2xr4q [750.94184ms] Nov 13 00:57:11.400: INFO: Got endpoints: latency-svc-vdnpj [750.580126ms] Nov 13 00:57:11.450: INFO: Got endpoints: latency-svc-8rdkx [748.383826ms] Nov 13 00:57:11.501: INFO: Got endpoints: latency-svc-gq7tk [750.972881ms] Nov 13 00:57:11.550: INFO: Got endpoints: latency-svc-9skxm [748.51885ms] Nov 13 00:57:11.600: INFO: Got endpoints: latency-svc-g94nz [750.038435ms] Nov 13 00:57:11.650: INFO: Got endpoints: latency-svc-q96qm [750.321967ms] Nov 13 00:57:11.700: INFO: Got endpoints: latency-svc-b97nr [750.229964ms] Nov 13 00:57:11.701: INFO: Latencies: [8.263411ms 8.584131ms 10.662282ms 13.625802ms 16.19499ms 18.081988ms 22.325326ms 25.497878ms 28.999177ms 31.953447ms 34.032563ms 36.545253ms 42.727325ms 44.558822ms 46.585489ms 46.588957ms 46.70784ms 47.042068ms 47.276281ms 47.502728ms 47.654939ms 47.762362ms 48.037908ms 48.918236ms 49.085067ms 49.217787ms 49.753062ms 49.921664ms 50.000062ms 50.259798ms 51.91704ms 91.191353ms 138.697481ms 184.7789ms 232.174475ms 278.920879ms 324.477515ms 372.669067ms 417.959731ms 464.947724ms 512.906222ms 560.749989ms 608.157302ms 654.932382ms 703.420024ms 748.343854ms 748.383826ms 748.395928ms 748.421327ms 748.445912ms 748.470049ms 748.51885ms 748.557482ms 748.585578ms 748.593658ms 748.691533ms 748.700613ms 748.764747ms 748.854747ms 749.004659ms 749.070565ms 749.092657ms 749.119067ms 749.134157ms 749.205065ms 749.22112ms 749.246722ms 749.253183ms 749.308902ms 749.315845ms 749.317443ms 749.336855ms 749.375891ms 749.384559ms 749.41188ms 749.451428ms 749.480435ms 749.527191ms 749.552927ms 749.586764ms 749.594817ms 749.616924ms 749.647177ms 749.650686ms 749.6787ms 749.681088ms 749.711529ms 749.729643ms 749.838985ms 749.846094ms 749.853961ms 749.865473ms 749.878643ms 749.896412ms 749.973353ms 749.992457ms 750.012977ms 750.027959ms 750.038435ms 750.039089ms 750.084841ms 750.116399ms 750.123952ms 750.133661ms 750.136662ms 750.136681ms 750.141617ms 750.155561ms 750.174772ms 750.179659ms 750.186183ms 750.189955ms 750.197785ms 750.209306ms 750.222821ms 750.229964ms 750.239663ms 750.248953ms 750.321967ms 750.3289ms 750.338892ms 750.33915ms 750.352254ms 750.368613ms 750.383167ms 750.421675ms 750.441395ms 750.460142ms 750.534383ms 750.571678ms 750.580126ms 750.599831ms 750.600192ms 750.732421ms 750.789442ms 750.792884ms 750.810434ms 750.827426ms 750.852947ms 750.909191ms 750.94184ms 750.946895ms 750.953913ms 750.972881ms 751.000304ms 751.059271ms 751.216166ms 751.927721ms 798.295892ms 798.317154ms 
798.88076ms 798.931938ms 798.994995ms 799.059186ms 799.120519ms 799.190538ms 799.199988ms 799.325979ms 799.420394ms 799.490907ms 799.515223ms 799.567978ms 799.68054ms 799.764158ms 799.800667ms 799.844864ms 799.901452ms 799.960073ms 800.057515ms 800.290322ms 800.478395ms 800.498159ms 800.509929ms 800.572712ms 800.657733ms 800.688242ms 800.747107ms 800.820778ms 800.973364ms 800.98692ms 801.250995ms 801.282949ms 801.440428ms 849.268946ms 849.743288ms 850.009484ms 850.269646ms 850.270405ms 850.455983ms 850.503314ms 850.624395ms 850.676289ms 850.712385ms 851.250694ms 899.061842ms 899.374227ms 899.716372ms 899.825318ms 900.455306ms 901.049244ms] Nov 13 00:57:11.701: INFO: 50 %ile: 750.084841ms Nov 13 00:57:11.701: INFO: 90 %ile: 801.250995ms Nov 13 00:57:11.701: INFO: 99 %ile: 900.455306ms Nov 13 00:57:11.701: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:57:11.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-6114" for this suite. • [SLOW TEST:16.020 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":9,"skipped":52,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:06.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 00:56:06.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Nov 13 00:56:13.240: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-11-13T00:56:13Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-11-13T00:56:13Z]] name:name1 resourceVersion:62886 uid:b7ca4c5c-a3fa-42fc-9f0e-8bfa2dcf9085] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Nov 13 00:56:23.248: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-11-13T00:56:23Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-11-13T00:56:23Z]] name:name2 resourceVersion:63135 uid:7e68d443-4fe9-487d-8cdf-2788d7e86b30] num:map[num1:9223372036854775807 num2:1000000]]} 
STEP: Modifying first CR Nov 13 00:56:33.254: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-11-13T00:56:13Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-11-13T00:56:33Z]] name:name1 resourceVersion:63306 uid:b7ca4c5c-a3fa-42fc-9f0e-8bfa2dcf9085] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Nov 13 00:56:43.260: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-11-13T00:56:23Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-11-13T00:56:43Z]] name:name2 resourceVersion:63535 uid:7e68d443-4fe9-487d-8cdf-2788d7e86b30] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Nov 13 00:56:53.268: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-11-13T00:56:13Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-11-13T00:56:33Z]] name:name1 resourceVersion:63820 uid:b7ca4c5c-a3fa-42fc-9f0e-8bfa2dcf9085] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Nov 13 00:57:03.272: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-11-13T00:56:23Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-11-13T00:56:43Z]] name:name2 resourceVersion:64283 uid:7e68d443-4fe9-487d-8cdf-2788d7e86b30] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:57:13.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-9580" for this suite. 
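The ADDED/MODIFIED/DELETED sequence above is an ordinary watch on the custom resource. Assuming the CRD from this test is installed and its plural is wishihadchosennoxus (the group, version, kind, and field values below come from the log; the plural is a guess), the same sequence can be driven from two shells:

    # Shell 1: stream watch events for the custom resource.
    kubectl get wishihadchosennoxus.mygroup.example.com --watch --output-watch-events

    # Shell 2: create, modify, delete -> ADDED, MODIFIED, DELETED in shell 1.
    kubectl apply -f - <<'EOF'
    apiVersion: mygroup.example.com/v1beta1
    kind: WishIHadChosenNoxu
    metadata:
      name: name1
    content:
      key: value
    num:
      num1: 9223372036854775807
      num2: 1000000
    EOF
    kubectl patch wishihadchosennoxus.mygroup.example.com name1 --type=merge -p '{"dummy":"test"}'
    kubectl delete wishihadchosennoxus.mygroup.example.com name1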
• [SLOW TEST:67.626 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":16,"skipped":341,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:57:11.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-1406aaab-cf6f-42b1-b39b-c8b669327928
STEP: Creating a pod to test consume secrets
Nov 13 00:57:11.803: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-87c84687-a100-4ec0-b88f-cdae8ec9b3ce" in namespace "projected-5963" to be "Succeeded or Failed"
Nov 13 00:57:11.807: INFO: Pod "pod-projected-secrets-87c84687-a100-4ec0-b88f-cdae8ec9b3ce": Phase="Pending", Reason="", readiness=false. Elapsed: 3.683657ms
Nov 13 00:57:13.810: INFO: Pod "pod-projected-secrets-87c84687-a100-4ec0-b88f-cdae8ec9b3ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007159637s
Nov 13 00:57:15.814: INFO: Pod "pod-projected-secrets-87c84687-a100-4ec0-b88f-cdae8ec9b3ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010713157s
Nov 13 00:57:17.818: INFO: Pod "pod-projected-secrets-87c84687-a100-4ec0-b88f-cdae8ec9b3ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014912934s
STEP: Saw pod success
Nov 13 00:57:17.818: INFO: Pod "pod-projected-secrets-87c84687-a100-4ec0-b88f-cdae8ec9b3ce" satisfied condition "Succeeded or Failed"
Nov 13 00:57:17.821: INFO: Trying to get logs from node node2 pod pod-projected-secrets-87c84687-a100-4ec0-b88f-cdae8ec9b3ce container projected-secret-volume-test:
STEP: delete the pod
Nov 13 00:57:17.832: INFO: Waiting for pod pod-projected-secrets-87c84687-a100-4ec0-b88f-cdae8ec9b3ce to disappear
Nov 13 00:57:17.834: INFO: Pod pod-projected-secrets-87c84687-a100-4ec0-b88f-cdae8ec9b3ce no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:57:17.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5963" for this suite.
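The pod in this test mounts a secret through a projected volume and checks the file mode that defaultMode assigns. A minimal manifest with the same shape; the names, image, and mode here are illustrative rather than the test's generated ones:

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox
        command: ["ls", "-l", "/etc/projected"]   # lists the mounted key with the 0400 mode
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/projected
      volumes:
      - name: secret-volume
        projected:
          defaultMode: 0400
          sources:
          - secret:
              name: demo-secret
    EOF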
• [SLOW TEST:6.073 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":77,"failed":0}
S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:57:17.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should support proxy with --port 0 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: starting the proxy server
Nov 13 00:57:17.867: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4992 proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:57:17.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4992" for this suite.
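With --port 0 (the -p 0 above) the proxy binds an ephemeral port and prints the address it is serving on, which the test then curls. A minimal reproduction; the port in the curl line is whatever the first command actually prints, and --disable-filter should only be used against a trusted local setup:

    kubectl proxy --port=0 --disable-filter &
    # kubectl prints e.g.: Starting to serve on 127.0.0.1:38433
    curl http://127.0.0.1:38433/api/    # returns the API versions document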
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":11,"skipped":78,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:23.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-1034 Nov 13 00:55:23.848: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:25.851: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:27.853: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:55:29.854: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Nov 13 00:55:29.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1034 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Nov 13 00:55:30.198: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Nov 13 00:55:30.198: INFO: stdout: "iptables" Nov 13 00:55:30.198: INFO: proxyMode: iptables Nov 13 00:55:30.205: INFO: Waiting for pod kube-proxy-mode-detector to disappear Nov 13 00:55:30.207: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-1034 STEP: creating replication controller affinity-clusterip-timeout in namespace services-1034 I1113 00:55:30.216855 29 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-1034, replica count: 3 I1113 00:55:33.268298 29 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 00:55:36.270934 29 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 13 00:55:36.275: INFO: Creating new exec pod Nov 13 00:55:43.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1034 exec execpod-affinityqrbzz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' Nov 13 00:55:43.794: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip-timeout 80\n+ echo hostName\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" Nov 13 00:55:43.794: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Nov 13 00:55:43.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1034 exec execpod-affinityqrbzz -- /bin/sh -x -c echo hostName | 
nc -v -t -w 2 10.233.56.144 80' Nov 13 00:55:44.121: INFO: stderr: "+ nc -v -t -w 2 10.233.56.144 80\nConnection to 10.233.56.144 80 port [tcp/http] succeeded!\n+ echo hostName\n" Nov 13 00:55:44.121: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Nov 13 00:55:44.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1034 exec execpod-affinityqrbzz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.56.144:80/ ; done' Nov 13 00:55:44.410: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.56.144:80/\n [... the echo/curl pair repeats 16 times in total ...]" Nov 13 00:55:44.410: INFO: stdout: "\naffinity-clusterip-timeout-cxjzm [... the same hostname repeats 16 times in total ...]" Nov 13 00:55:44.410: INFO: Received response from host: affinity-clusterip-timeout-cxjzm [identical record repeated 16 times] Nov 13 00:55:44.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1034 exec execpod-affinityqrbzz -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.56.144:80/' Nov 13 00:55:44.896: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.56.144:80/\n" Nov 13 00:55:44.896: INFO: stdout: "affinity-clusterip-timeout-cxjzm" [the same single-curl probe was repeated at Nov 13 00:56:04.900, 00:56:25.189 and 00:56:45.427, each returning "affinity-clusterip-timeout-cxjzm"] Nov 13 00:57:06.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1034 exec execpod-affinityqrbzz -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.56.144:80/' Nov 13 00:57:06.989: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.56.144:80/\n" Nov 13 00:57:06.990: INFO: stdout: "affinity-clusterip-timeout-r4bxz" Nov 13 00:57:06.990: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-1034, will wait for the garbage collector to delete the pods Nov 13 00:57:07.056: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 3.998932ms Nov 13 00:57:07.157: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.420234ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:57:21.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1034" for this suite.
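
For context on what just happened: the probes above kept landing on affinity-clusterip-timeout-cxjzm for over a minute and then moved to affinity-clusterip-timeout-r4bxz, which is the session-affinity timeout taking effect. A minimal sketch of the Service shape that drives this behavior, using the Go client types; the selector label, the pod target port, and passing the timeout in as a parameter are illustrative assumptions, not values read from this run:

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    // affinityService returns a ClusterIP Service whose client-IP affinity
    // lapses once no request arrives within timeoutSeconds; after that the
    // next request may land on a different backend pod.
    func affinityService(ns string, timeoutSeconds int32) *corev1.Service {
    	return &corev1.Service{
    		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-timeout", Namespace: ns},
    		Spec: corev1.ServiceSpec{
    			Type:            corev1.ServiceTypeClusterIP,
    			Selector:        map[string]string{"name": "affinity-clusterip-timeout"}, // assumed label
    			Ports:           []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(9376)}},
    			SessionAffinity: corev1.ServiceAffinityClientIP,
    			SessionAffinityConfig: &corev1.SessionAffinityConfig{
    				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeoutSeconds},
    			},
    		},
    	}
    }
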
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:117.664 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":11,"skipped":111,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:57:13.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating secret secrets-750/secret-test-85c8f565-e80e-43d9-985e-f675952f3c16 STEP: Creating a pod to test consume secrets Nov 13 00:57:13.850: INFO: Waiting up to 5m0s for pod "pod-configmaps-a80555a3-7d7c-4c88-8df2-eaa2f459858d" in namespace "secrets-750" to be "Succeeded or Failed" Nov 13 00:57:13.852: INFO: Pod "pod-configmaps-a80555a3-7d7c-4c88-8df2-eaa2f459858d": Phase="Pending", Reason="", readiness=false. Elapsed: 1.952063ms Nov 13 00:57:15.856: INFO: Pod "pod-configmaps-a80555a3-7d7c-4c88-8df2-eaa2f459858d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006678425s Nov 13 00:57:17.861: INFO: Pod "pod-configmaps-a80555a3-7d7c-4c88-8df2-eaa2f459858d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010831229s Nov 13 00:57:19.864: INFO: Pod "pod-configmaps-a80555a3-7d7c-4c88-8df2-eaa2f459858d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014581714s Nov 13 00:57:21.867: INFO: Pod "pod-configmaps-a80555a3-7d7c-4c88-8df2-eaa2f459858d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017702205s Nov 13 00:57:23.871: INFO: Pod "pod-configmaps-a80555a3-7d7c-4c88-8df2-eaa2f459858d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.02155764s STEP: Saw pod success Nov 13 00:57:23.871: INFO: Pod "pod-configmaps-a80555a3-7d7c-4c88-8df2-eaa2f459858d" satisfied condition "Succeeded or Failed" Nov 13 00:57:23.873: INFO: Trying to get logs from node node2 pod pod-configmaps-a80555a3-7d7c-4c88-8df2-eaa2f459858d container env-test: STEP: delete the pod Nov 13 00:57:23.887: INFO: Waiting for pod pod-configmaps-a80555a3-7d7c-4c88-8df2-eaa2f459858d to disappear Nov 13 00:57:23.889: INFO: Pod pod-configmaps-a80555a3-7d7c-4c88-8df2-eaa2f459858d no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:57:23.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-750" for this suite. 
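
The Secrets test that ends here exercises env-var injection: one key of a Secret is mapped into the test container's environment, and the pod prints it for verification. A minimal sketch of that wiring with the core/v1 types; the variable name SECRET_DATA is a hypothetical placeholder:

    import corev1 "k8s.io/api/core/v1"

    // secretEnvVar maps one key of an existing Secret into a container
    // environment variable via valueFrom/secretKeyRef.
    func secretEnvVar(secretName, key string) corev1.EnvVar {
    	return corev1.EnvVar{
    		Name: "SECRET_DATA", // hypothetical name
    		ValueFrom: &corev1.EnvVarSource{
    			SecretKeyRef: &corev1.SecretKeySelector{
    				LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
    				Key:                  key,
    			},
    		},
    	}
    }
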
• [SLOW TEST:10.084 seconds] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":351,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:57:09.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 13 00:57:10.347: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 13 00:57:12.356: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361830, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361830, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361830, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361830, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 00:57:14.360: INFO: deployment status: [identical v1.DeploymentStatus dump as the previous poll] STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 13 00:57:17.369: INFO: Waiting for
amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 00:57:17.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5188-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:57:25.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8314" for this suite. STEP: Destroying namespace "webhook-8314-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.534 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":12,"skipped":204,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:57:23.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's args Nov 13 00:57:23.947: INFO: Waiting up to 5m0s for pod "var-expansion-87823938-0a6c-4bf9-b8a6-3b6ab010ce9a" in namespace "var-expansion-1785" to be "Succeeded or Failed" Nov 13 00:57:23.950: INFO: Pod "var-expansion-87823938-0a6c-4bf9-b8a6-3b6ab010ce9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.704668ms Nov 13 00:57:25.955: INFO: Pod "var-expansion-87823938-0a6c-4bf9-b8a6-3b6ab010ce9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007866828s Nov 13 00:57:27.990: INFO: Pod "var-expansion-87823938-0a6c-4bf9-b8a6-3b6ab010ce9a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043307187s Nov 13 00:57:29.994: INFO: Pod "var-expansion-87823938-0a6c-4bf9-b8a6-3b6ab010ce9a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.046647924s STEP: Saw pod success Nov 13 00:57:29.994: INFO: Pod "var-expansion-87823938-0a6c-4bf9-b8a6-3b6ab010ce9a" satisfied condition "Succeeded or Failed" Nov 13 00:57:29.997: INFO: Trying to get logs from node node1 pod var-expansion-87823938-0a6c-4bf9-b8a6-3b6ab010ce9a container dapi-container: STEP: delete the pod Nov 13 00:57:30.010: INFO: Waiting for pod var-expansion-87823938-0a6c-4bf9-b8a6-3b6ab010ce9a to disappear Nov 13 00:57:30.012: INFO: Pod var-expansion-87823938-0a6c-4bf9-b8a6-3b6ab010ce9a no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:57:30.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1785" for this suite. • [SLOW TEST:6.107 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":356,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:57:18.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6893.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6893.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6893.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6893.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6893.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6893.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6893.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6893.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6893.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6893.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6893.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6893.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6893.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 163.62.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.62.163_udp@PTR;check="$$(dig +tcp +noall +answer +search 163.62.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.62.163_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6893.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6893.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6893.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6893.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6893.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6893.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6893.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6893.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6893.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6893.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6893.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6893.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6893.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 163.62.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.62.163_udp@PTR;check="$$(dig +tcp +noall +answer +search 163.62.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.62.163_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 13 00:57:26.061: INFO: Unable to read wheezy_udp@dns-test-service.dns-6893.svc.cluster.local from pod dns-6893/dns-test-2b9dbc4b-9e7c-46df-9eb4-6e10f4566fd5: the server could not find the requested resource (get pods dns-test-2b9dbc4b-9e7c-46df-9eb4-6e10f4566fd5) Nov 13 00:57:26.064: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6893.svc.cluster.local from pod dns-6893/dns-test-2b9dbc4b-9e7c-46df-9eb4-6e10f4566fd5: the server could not find the requested resource (get pods dns-test-2b9dbc4b-9e7c-46df-9eb4-6e10f4566fd5) Nov 13 00:57:26.066: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6893.svc.cluster.local from pod dns-6893/dns-test-2b9dbc4b-9e7c-46df-9eb4-6e10f4566fd5: the server could not find the requested resource (get pods dns-test-2b9dbc4b-9e7c-46df-9eb4-6e10f4566fd5) Nov 13 00:57:26.068: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6893.svc.cluster.local from pod dns-6893/dns-test-2b9dbc4b-9e7c-46df-9eb4-6e10f4566fd5: the server could not find the requested resource (get pods dns-test-2b9dbc4b-9e7c-46df-9eb4-6e10f4566fd5) Nov 13 00:57:26.085: INFO: Unable to read jessie_udp@dns-test-service.dns-6893.svc.cluster.local from pod dns-6893/dns-test-2b9dbc4b-9e7c-46df-9eb4-6e10f4566fd5: the server could not find the requested resource (get pods dns-test-2b9dbc4b-9e7c-46df-9eb4-6e10f4566fd5) Nov 13 00:57:26.087: INFO: Unable to read jessie_tcp@dns-test-service.dns-6893.svc.cluster.local from pod dns-6893/dns-test-2b9dbc4b-9e7c-46df-9eb4-6e10f4566fd5: the server could not find the requested resource (get pods dns-test-2b9dbc4b-9e7c-46df-9eb4-6e10f4566fd5) Nov 13 00:57:26.090: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6893.svc.cluster.local from pod dns-6893/dns-test-2b9dbc4b-9e7c-46df-9eb4-6e10f4566fd5: the server could not find the requested resource (get pods dns-test-2b9dbc4b-9e7c-46df-9eb4-6e10f4566fd5) Nov 13 00:57:26.092: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6893.svc.cluster.local from pod dns-6893/dns-test-2b9dbc4b-9e7c-46df-9eb4-6e10f4566fd5: the server could not find the requested resource (get pods dns-test-2b9dbc4b-9e7c-46df-9eb4-6e10f4566fd5) Nov 13 00:57:26.105: INFO: Lookups using dns-6893/dns-test-2b9dbc4b-9e7c-46df-9eb4-6e10f4566fd5 failed for: [wheezy_udp@dns-test-service.dns-6893.svc.cluster.local wheezy_tcp@dns-test-service.dns-6893.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6893.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6893.svc.cluster.local jessie_udp@dns-test-service.dns-6893.svc.cluster.local jessie_tcp@dns-test-service.dns-6893.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6893.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6893.svc.cluster.local] Nov 13 00:57:31.158: INFO: DNS probes using dns-6893/dns-test-2b9dbc4b-9e7c-46df-9eb4-6e10f4566fd5 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:57:31.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6893" for this suite. 
• [SLOW TEST:13.179 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":12,"skipped":87,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:57:25.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override arguments Nov 13 00:57:25.528: INFO: Waiting up to 5m0s for pod "client-containers-544b582f-6560-4461-804d-f7fcdd52e1d4" in namespace "containers-9842" to be "Succeeded or Failed" Nov 13 00:57:25.531: INFO: Pod "client-containers-544b582f-6560-4461-804d-f7fcdd52e1d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.461591ms Nov 13 00:57:27.535: INFO: Pod "client-containers-544b582f-6560-4461-804d-f7fcdd52e1d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006640811s Nov 13 00:57:29.539: INFO: Pod "client-containers-544b582f-6560-4461-804d-f7fcdd52e1d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010744823s Nov 13 00:57:31.544: INFO: Pod "client-containers-544b582f-6560-4461-804d-f7fcdd52e1d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015803653s STEP: Saw pod success Nov 13 00:57:31.544: INFO: Pod "client-containers-544b582f-6560-4461-804d-f7fcdd52e1d4" satisfied condition "Succeeded or Failed" Nov 13 00:57:31.547: INFO: Trying to get logs from node node1 pod client-containers-544b582f-6560-4461-804d-f7fcdd52e1d4 container agnhost-container: STEP: delete the pod Nov 13 00:57:31.559: INFO: Waiting for pod client-containers-544b582f-6560-4461-804d-f7fcdd52e1d4 to disappear Nov 13 00:57:31.561: INFO: Pod client-containers-544b582f-6560-4461-804d-f7fcdd52e1d4 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:57:31.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9842" for this suite. 
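
The Docker Containers test that ends here checks argument-override semantics: in a pod spec, args replaces the image's default CMD while the image ENTRYPOINT stays in effect (command would replace the ENTRYPOINT as well). A minimal sketch; the image tag and argument values are illustrative assumptions:

    import corev1 "k8s.io/api/core/v1"

    // overrideArgsContainer overrides the image's default arguments (docker
    // CMD) while leaving its entrypoint untouched.
    func overrideArgsContainer() corev1.Container {
    	return corev1.Container{
    		Name:  "agnhost-container",
    		Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative tag
    		Args:  []string{"entrypoint-tester", "override", "arguments"}, // replaces CMD only
    	}
    }
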
• [SLOW TEST:6.075 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":206,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:57:31.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pod templates Nov 13 00:57:31.612: INFO: created test-podtemplate-1 Nov 13 00:57:31.615: INFO: created test-podtemplate-2 Nov 13 00:57:31.618: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Nov 13 00:57:31.620: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Nov 13 00:57:31.628: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:57:31.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-61" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":14,"skipped":213,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:57:31.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-737b82e7-04e8-4982-b4d8-2cf4f185742b STEP: Creating a pod to test consume configMaps Nov 13 00:57:31.695: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e9dbf5e3-1c91-4a9f-8b35-d6706d1a28a6" in namespace "projected-3096" to be "Succeeded or Failed" Nov 13 00:57:31.697: INFO: Pod "pod-projected-configmaps-e9dbf5e3-1c91-4a9f-8b35-d6706d1a28a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.696085ms Nov 13 00:57:33.700: INFO: Pod "pod-projected-configmaps-e9dbf5e3-1c91-4a9f-8b35-d6706d1a28a6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005555351s Nov 13 00:57:35.705: INFO: Pod "pod-projected-configmaps-e9dbf5e3-1c91-4a9f-8b35-d6706d1a28a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010068454s STEP: Saw pod success Nov 13 00:57:35.705: INFO: Pod "pod-projected-configmaps-e9dbf5e3-1c91-4a9f-8b35-d6706d1a28a6" satisfied condition "Succeeded or Failed" Nov 13 00:57:35.708: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-e9dbf5e3-1c91-4a9f-8b35-d6706d1a28a6 container projected-configmap-volume-test: STEP: delete the pod Nov 13 00:57:35.723: INFO: Waiting for pod pod-projected-configmaps-e9dbf5e3-1c91-4a9f-8b35-d6706d1a28a6 to disappear Nov 13 00:57:35.725: INFO: Pod pod-projected-configmaps-e9dbf5e3-1c91-4a9f-8b35-d6706d1a28a6 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:57:35.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3096" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":222,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:09.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service nodeport-test with type=NodePort in namespace services-6999 STEP: creating replication controller nodeport-test in namespace services-6999 I1113 00:55:10.007044 36 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-6999, replica count: 2 I1113 00:55:13.057697 36 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 00:55:16.058711 36 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 00:55:19.065037 36 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 00:55:22.066358 36 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 13 00:55:22.066: INFO: Creating new exec pod Nov 13 00:55:31.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Nov 13 00:55:31.602: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Nov 13 00:55:31.602: INFO: stdout: "nodeport-test-k9r8q" Nov 13 00:55:31.603: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.56.34 80' Nov 13 00:55:31.867: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.56.34 80\nConnection to 10.233.56.34 80 port [tcp/http] succeeded!\n" Nov 13 00:55:31.867: INFO: stdout: "" Nov 13 00:55:32.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.56.34 80' Nov 13 00:55:33.278: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.56.34 80\nConnection to 10.233.56.34 80 port [tcp/http] succeeded!\n" Nov 13 00:55:33.278: INFO: stdout: "" Nov 13 00:55:33.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.56.34 80' Nov 13 00:55:34.240: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.56.34 80\nConnection to 10.233.56.34 80 port [tcp/http] succeeded!\n" Nov 13 00:55:34.240: INFO: stdout: "nodeport-test-frwlq" Nov 13 00:55:34.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:55:34.495: INFO: rc: 1 Nov 13 00:55:34.495: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... [the same NodePort probe was retried roughly once per second from Nov 13 00:55:35.496 through Nov 13 00:56:16.756; every attempt logged rc: 1 and the same "Connection refused" error followed by Retrying...; the near-identical retry records are condensed here]
Nov 13 00:56:17.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:17.761: INFO: rc: 1 Nov 13 00:56:17.761: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:18.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:18.743: INFO: rc: 1 Nov 13 00:56:18.743: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:19.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:19.812: INFO: rc: 1 Nov 13 00:56:19.812: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:20.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:20.960: INFO: rc: 1 Nov 13 00:56:20.960: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:21.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:21.761: INFO: rc: 1 Nov 13 00:56:21.761: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:56:22.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:22.896: INFO: rc: 1 Nov 13 00:56:22.896: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:23.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:23.742: INFO: rc: 1 Nov 13 00:56:23.742: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:24.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:24.748: INFO: rc: 1 Nov 13 00:56:24.748: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32019 + echo hostName nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:25.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:25.720: INFO: rc: 1 Nov 13 00:56:25.720: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:26.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:27.401: INFO: rc: 1 Nov 13 00:56:27.401: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:56:27.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:28.006: INFO: rc: 1 Nov 13 00:56:28.006: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:28.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:28.752: INFO: rc: 1 Nov 13 00:56:28.752: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:29.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:29.742: INFO: rc: 1 Nov 13 00:56:29.743: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32019 + echo hostName nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:30.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:30.748: INFO: rc: 1 Nov 13 00:56:30.748: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:31.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:31.760: INFO: rc: 1 Nov 13 00:56:31.760: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:56:32.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:32.735: INFO: rc: 1 Nov 13 00:56:32.735: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:33.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:33.768: INFO: rc: 1 Nov 13 00:56:33.768: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32019 + echo hostName nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:34.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:34.747: INFO: rc: 1 Nov 13 00:56:34.747: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:35.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:35.751: INFO: rc: 1 Nov 13 00:56:35.751: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32019 + echo hostName nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:36.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:36.758: INFO: rc: 1 Nov 13 00:56:36.758: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:56:37.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:37.743: INFO: rc: 1 Nov 13 00:56:37.743: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:38.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:38.752: INFO: rc: 1 Nov 13 00:56:38.752: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:39.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:39.852: INFO: rc: 1 Nov 13 00:56:39.852: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:40.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:40.946: INFO: rc: 1 Nov 13 00:56:40.946: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:41.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:41.871: INFO: rc: 1 Nov 13 00:56:41.871: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:56:42.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:42.735: INFO: rc: 1 Nov 13 00:56:42.735: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32019 + echo hostName nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:43.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:43.997: INFO: rc: 1 Nov 13 00:56:43.997: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:44.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:44.764: INFO: rc: 1 Nov 13 00:56:44.764: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:45.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:45.915: INFO: rc: 1 Nov 13 00:56:45.915: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:46.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:46.783: INFO: rc: 1 Nov 13 00:56:46.783: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:56:47.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:47.756: INFO: rc: 1 Nov 13 00:56:47.756: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:48.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:48.790: INFO: rc: 1 Nov 13 00:56:48.790: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:49.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:49.751: INFO: rc: 1 Nov 13 00:56:49.751: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:50.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:50.752: INFO: rc: 1 Nov 13 00:56:50.752: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:51.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:51.758: INFO: rc: 1 Nov 13 00:56:51.758: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:56:52.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:53.006: INFO: rc: 1 Nov 13 00:56:53.006: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:53.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:53.751: INFO: rc: 1 Nov 13 00:56:53.751: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:54.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:54.882: INFO: rc: 1 Nov 13 00:56:54.882: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:55.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:55.868: INFO: rc: 1 Nov 13 00:56:55.868: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:56.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:57.064: INFO: rc: 1 Nov 13 00:56:57.064: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:56:57.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:57.788: INFO: rc: 1 Nov 13 00:56:57.788: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:58.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:58.743: INFO: rc: 1 Nov 13 00:56:58.743: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:56:59.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:56:59.732: INFO: rc: 1 Nov 13 00:56:59.732: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:00.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:57:00.750: INFO: rc: 1 Nov 13 00:57:00.750: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:01.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:57:01.765: INFO: rc: 1 Nov 13 00:57:01.766: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:57:02.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:57:02.807: INFO: rc: 1 Nov 13 00:57:02.807: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:03.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:57:03.740: INFO: rc: 1 Nov 13 00:57:03.740: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:04.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:57:05.026: INFO: rc: 1 Nov 13 00:57:05.026: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:05.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:57:05.970: INFO: rc: 1 Nov 13 00:57:05.970: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:06.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:57:07.320: INFO: rc: 1 Nov 13 00:57:07.320: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:57:07.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:57:07.761: INFO: rc: 1 Nov 13 00:57:07.761: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:08.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:57:08.909: INFO: rc: 1 Nov 13 00:57:08.909: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:09.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:57:10.467: INFO: rc: 1 Nov 13 00:57:10.467: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:10.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:57:11.948: INFO: rc: 1 Nov 13 00:57:11.948: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:12.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:57:12.878: INFO: rc: 1 Nov 13 00:57:12.878: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:57:13.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:57:14.139: INFO: rc: 1 Nov 13 00:57:14.140: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:14.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:57:14.755: INFO: rc: 1 Nov 13 00:57:14.755: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:15.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:57:15.741: INFO: rc: 1 Nov 13 00:57:15.741: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:16.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:57:16.753: INFO: rc: 1 Nov 13 00:57:16.753: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:17.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:57:17.748: INFO: rc: 1 Nov 13 00:57:17.748: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:57:18.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:57:18.875: INFO: rc: 1 Nov 13 00:57:18.875: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:19.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:57:19.747: INFO: rc: 1 Nov 13 00:57:19.747: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:20.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:57:20.750: INFO: rc: 1 Nov 13 00:57:20.750: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:21.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:57:21.745: INFO: rc: 1 Nov 13 00:57:21.746: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:22.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019' Nov 13 00:57:22.803: INFO: rc: 1 Nov 13 00:57:22.803: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32019 nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:57:23.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019'
Nov 13 00:57:24.088: INFO: rc: 1
Nov 13 00:57:24.088: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32019
nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[... the same probe was retried roughly once per second, from 00:57:24.496 through 00:57:34.497, each attempt returning rc: 1 with identical "Connection refused" output (the two sh -x trace lines occasionally interleaved by the shell) ...]
Nov 13 00:57:34.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019'
Nov 13 00:57:35.027: INFO: rc: 1
Nov 13 00:57:35.027: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpodbwk7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32019:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32019
nc: connect to 10.10.190.207 port 32019 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 00:57:35.028: FAIL: Unexpected error:
    <*errors.errorString | 0xc00410cc20>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32019 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32019 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.11()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 +0x265
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000183680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000183680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000183680, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-6999".
STEP: Found 17 events.
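The failure above comes from a poll-until-deadline loop: the suite re-runs the same kubectl exec + nc probe about once per second until the 2m0s budget is exhausted, then raises the timeout error. A minimal standalone sketch of that pattern, probing the same NodePort endpoint directly with Go's net.DialTimeout rather than through kubectl exec (the endpoint, per-attempt timeout, retry cadence, and overall budget are taken from the log; this is an illustration of the pattern, not the e2e framework's implementation):

```go
// reachability_probe.go - poll a TCP endpoint until it accepts a connection
// or an overall deadline passes, mirroring the retry loop in the log:
// ~1s between attempts, 2s per-dial timeout (nc -w 2), 2m total budget.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const (
		endpoint    = "10.10.190.207:32019" // NodePort endpoint from the log
		dialTimeout = 2 * time.Second       // matches `nc -w 2`
		interval    = time.Second           // matches the ~1s retry cadence
		budget      = 2 * time.Minute       // matches the 2m0s test timeout
	)
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", endpoint, dialTimeout)
		if err == nil {
			conn.Close()
			fmt.Println("endpoint reachable")
			return
		}
		fmt.Printf("probe failed (%v), retrying...\n", err)
		time.Sleep(interval)
	}
	fmt.Fprintf(os.Stderr, "service is not reachable within %v on endpoint %s over TCP protocol\n", budget, endpoint)
	os.Exit(1)
}
```

Against a healthy NodePort the loop exits on the first successful dial; in this run every dial would have failed with connection refused until the budget expired, which is exactly the sequence the log records.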
Nov 13 00:57:35.044: INFO: At 2021-11-13 00:55:10 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-k9r8q
Nov 13 00:57:35.044: INFO: At 2021-11-13 00:55:10 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-frwlq
Nov 13 00:57:35.044: INFO: At 2021-11-13 00:55:10 +0000 UTC - event for nodeport-test-frwlq: {default-scheduler } Scheduled: Successfully assigned services-6999/nodeport-test-frwlq to node2
Nov 13 00:57:35.044: INFO: At 2021-11-13 00:55:10 +0000 UTC - event for nodeport-test-k9r8q: {default-scheduler } Scheduled: Successfully assigned services-6999/nodeport-test-k9r8q to node2
Nov 13 00:57:35.044: INFO: At 2021-11-13 00:55:13 +0000 UTC - event for nodeport-test-frwlq: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 00:57:35.044: INFO: At 2021-11-13 00:55:13 +0000 UTC - event for nodeport-test-k9r8q: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 00:57:35.044: INFO: At 2021-11-13 00:55:16 +0000 UTC - event for nodeport-test-frwlq: {kubelet node2} Started: Started container nodeport-test
Nov 13 00:57:35.044: INFO: At 2021-11-13 00:55:16 +0000 UTC - event for nodeport-test-frwlq: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 3.006280466s
Nov 13 00:57:35.044: INFO: At 2021-11-13 00:55:16 +0000 UTC - event for nodeport-test-frwlq: {kubelet node2} Created: Created container nodeport-test
Nov 13 00:57:35.044: INFO: At 2021-11-13 00:55:16 +0000 UTC - event for nodeport-test-k9r8q: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 2.714444621s
Nov 13 00:57:35.044: INFO: At 2021-11-13 00:55:16 +0000 UTC - event for nodeport-test-k9r8q: {kubelet node2} Created: Created container nodeport-test
Nov 13 00:57:35.044: INFO: At 2021-11-13 00:55:16 +0000 UTC - event for nodeport-test-k9r8q: {kubelet node2} Started: Started container nodeport-test
Nov 13 00:57:35.044: INFO: At 2021-11-13 00:55:22 +0000 UTC - event for execpodbwk7w: {default-scheduler } Scheduled: Successfully assigned services-6999/execpodbwk7w to node1
Nov 13 00:57:35.044: INFO: At 2021-11-13 00:55:24 +0000 UTC - event for execpodbwk7w: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 00:57:35.044: INFO: At 2021-11-13 00:55:27 +0000 UTC - event for execpodbwk7w: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 3.079785358s
Nov 13 00:57:35.044: INFO: At 2021-11-13 00:55:27 +0000 UTC - event for execpodbwk7w: {kubelet node1} Created: Created container agnhost-container
Nov 13 00:57:35.044: INFO: At 2021-11-13 00:55:28 +0000 UTC - event for execpodbwk7w: {kubelet node1} Started: Started container agnhost-container
Nov 13 00:57:35.048: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 13 00:57:35.048: INFO: execpodbwk7w node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:55:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:55:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:55:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:55:22 +0000 UTC }]
Nov 13 00:57:35.048: INFO: nodeport-test-frwlq node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:55:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:55:17 +0000 UTC }
{ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:55:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:55:10 +0000 UTC }] Nov 13 00:57:35.048: INFO: nodeport-test-k9r8q node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:55:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:55:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:55:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:55:10 +0000 UTC }] Nov 13 00:57:35.048: INFO: Nov 13 00:57:35.052: INFO: Logging node info for node master1 Nov 13 00:57:35.056: INFO: Node Info: &Node{ObjectMeta:{master1 56d66c54-e52b-494a-a758-e4b658c4b245 66379 0 2021-11-12 21:05:50 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:05:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: 
{{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:25 +0000 UTC,LastTransitionTime:2021-11-12 21:11:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 00:57:28 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 00:57:28 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 00:57:28 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 00:57:28 +0000 UTC,LastTransitionTime:2021-11-12 21:11:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94e600d00e79450a9fb632d8473a11eb,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:6e4bb815-8b93-47c2-9321-93e7ada261f6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:57d1a39684ee5a5b5d34638cff843561d440d0f996303b2e841cabf228a4c2af nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 00:57:35.056: INFO: Logging kubelet events for node master1 Nov 13 00:57:35.058: INFO: Logging pods the kubelet thinks is on node master1 Nov 13 00:57:35.101: INFO: coredns-8474476ff8-9vc8b started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.101: INFO: Container coredns ready: true, restart count 2 Nov 13 00:57:35.101: INFO: node-exporter-zm5hq started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 00:57:35.101: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 00:57:35.101: INFO: Container node-exporter ready: true, restart count 0 Nov 13 00:57:35.101: INFO: kube-scheduler-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.101: INFO: Container kube-scheduler ready: true, restart count 0 Nov 13 00:57:35.101: INFO: kube-controller-manager-master1 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.101: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 13 00:57:35.101: INFO: kube-flannel-79bvx started at 2021-11-12 21:08:36 +0000 UTC (1+1 
container statuses recorded) Nov 13 00:57:35.101: INFO: Init container install-cni ready: true, restart count 0 Nov 13 00:57:35.101: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 00:57:35.101: INFO: kube-multus-ds-amd64-qtmwl started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.101: INFO: Container kube-multus ready: true, restart count 1 Nov 13 00:57:35.101: INFO: container-registry-65d7c44b96-qwqcz started at 2021-11-12 21:12:56 +0000 UTC (0+2 container statuses recorded) Nov 13 00:57:35.101: INFO: Container docker-registry ready: true, restart count 0 Nov 13 00:57:35.101: INFO: Container nginx ready: true, restart count 0 Nov 13 00:57:35.101: INFO: kube-apiserver-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.101: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 00:57:35.101: INFO: kube-proxy-6m7qt started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.101: INFO: Container kube-proxy ready: true, restart count 1 W1113 00:57:35.117309 36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 00:57:35.194: INFO: Latency metrics for node master1 Nov 13 00:57:35.194: INFO: Logging node info for node master2 Nov 13 00:57:35.196: INFO: Node Info: &Node{ObjectMeta:{master2 9cc6c106-2749-4b3a-bbe2-d8a672ab49e0 66291 0 2021-11-12 21:06:20 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:06:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-11-12 21:16:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-12 21:16:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:30 +0000 UTC,LastTransitionTime:2021-11-12 21:11:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 00:57:25 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 00:57:25 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 00:57:25 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 00:57:25 +0000 UTC,LastTransitionTime:2021-11-12 21:08:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:65d51a0e6dc44ad1ac5d3b5cd37365f1,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:728abaee-0c5e-4ddb-a22e-72a1345c5ab6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 00:57:35.197: INFO: Logging kubelet events for node master2 Nov 13 00:57:35.199: INFO: Logging pods the kubelet thinks is on node master2 Nov 13 00:57:35.216: INFO: kube-scheduler-master2 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.216: INFO: Container kube-scheduler ready: true, restart count 2 Nov 13 00:57:35.216: INFO: kube-proxy-5xbt9 started at 2021-11-12 
21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 00:57:35.216: INFO: Container kube-proxy ready: true, restart count 2
Nov 13 00:57:35.216: INFO: kube-flannel-x76f4 started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 00:57:35.216: INFO: Init container install-cni ready: true, restart count 0
Nov 13 00:57:35.216: INFO: Container kube-flannel ready: true, restart count 1
Nov 13 00:57:35.216: INFO: kube-multus-ds-amd64-8zzgk started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 00:57:35.216: INFO: Container kube-multus ready: true, restart count 1
Nov 13 00:57:35.216: INFO: coredns-8474476ff8-s7twh started at 2021-11-12 21:09:11 +0000 UTC (0+1 container statuses recorded)
Nov 13 00:57:35.216: INFO: Container coredns ready: true, restart count 1
Nov 13 00:57:35.216: INFO: node-exporter-clpwc started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 00:57:35.216: INFO: Container kube-rbac-proxy ready: true, restart count 0
Nov 13 00:57:35.216: INFO: Container node-exporter ready: true, restart count 0
Nov 13 00:57:35.216: INFO: kube-controller-manager-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 00:57:35.216: INFO: Container kube-controller-manager ready: true, restart count 2
Nov 13 00:57:35.216: INFO: node-feature-discovery-controller-cff799f9f-c54h8 started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 00:57:35.216: INFO: Container nfd-controller ready: true, restart count 0
Nov 13 00:57:35.216: INFO: kube-apiserver-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 00:57:35.216: INFO: Container kube-apiserver ready: true, restart count 0
W1113 00:57:35.229478 36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 00:57:35.296: INFO: Latency metrics for node master2 Nov 13 00:57:35.296: INFO: Logging node info for node master3 Nov 13 00:57:35.298: INFO: Node Info: &Node{ObjectMeta:{master3 fce0cd54-e4d8-4ce1-b720-522aad2d7989 66406 0 2021-11-12 21:06:31 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 
UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 00:57:29 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 00:57:29 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 00:57:29 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 00:57:29 +0000 UTC,LastTransitionTime:2021-11-12 21:11:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:592c271b4697499588d9c2b3767b866a,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a95de4ca-c566-4b34-8463-623af932d416,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 00:57:35.299: INFO: Logging kubelet events for node master3 Nov 13 00:57:35.301: INFO: Logging pods the kubelet thinks is on node master3 Nov 13 00:57:35.316: INFO: kube-scheduler-master3 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.316: INFO: Container kube-scheduler ready: true, restart count 2 Nov 13 00:57:35.316: INFO: node-exporter-l4x25 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 00:57:35.316: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 00:57:35.316: INFO: Container node-exporter ready: true, restart count 0 Nov 13 00:57:35.316: INFO: kube-apiserver-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.316: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 00:57:35.316: INFO: kube-controller-manager-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.316: INFO: Container kube-controller-manager ready: true, restart count 3 Nov 13 00:57:35.316: INFO: kube-multus-ds-amd64-vp8p7 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.316: INFO: Container kube-multus ready: true, restart count 1 Nov 13 00:57:35.316: INFO: dns-autoscaler-7df78bfcfb-d88qs started at 2021-11-12 21:09:13 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.316: INFO: Container autoscaler ready: true, restart count 1 Nov 13 00:57:35.316: INFO: kube-proxy-tssd5 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.316: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 00:57:35.316: INFO: kube-flannel-vxlrs started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 00:57:35.316: INFO: Init container install-cni ready: true, restart count 0 Nov 13 00:57:35.316: INFO: Container kube-flannel ready: true, restart count 1 W1113 00:57:35.332821 36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
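The per-node pod listings in this section ("Logging pods the kubelet thinks is on node ...") enumerate the pods bound to each node. Outside the suite, the same listing can be approximated with a client-go field selector on spec.nodeName; a sketch under that assumption (the kubeconfig path matches the one used throughout this run, and node1 stands in for whichever node is of interest):

```go
// listpods.go - list pods scheduled to a given node, similar to the
// per-node dumps in this log. Requires k8s.io/client-go; the node name
// below is an illustrative placeholder.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the e2e run uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// All namespaces, filtered server-side to pods bound to node1.
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=node1",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
```

The equivalent CLI query is a field selector on kubectl's pod list; the framework's dump additionally walks each pod's container statuses, which is where the "Container ... ready: true, restart count N" lines come from.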
Nov 13 00:57:35.397: INFO: Latency metrics for node master3 Nov 13 00:57:35.397: INFO: Logging node info for node node1 Nov 13 00:57:35.402: INFO: Node Info: &Node{ObjectMeta:{node1 6ceb907c-9809-4d18-88c6-b1e10ba80f97 66528 0 2021-11-12 21:07:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-12 21:20:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:27 +0000 UTC,LastTransitionTime:2021-11-12 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 00:57:31 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 00:57:31 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 00:57:31 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 00:57:31 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7cf6287777fe4e3b9a80df40dea25b6d,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:2125bc5f-9167-464a-b6d0-8e8a192327d3,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:1841df8d4cc71e4f69cc1603012b99570f40d18cd36ee1065933b46f984cf0cd alpine:3.12],SizeBytes:5592390,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 00:57:35.403: INFO: Logging kubelet events for node node1 Nov 13 00:57:35.405: INFO: Logging pods the kubelet thinks is on node node1 Nov 13 00:57:35.423: INFO: collectd-74xkn started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded) Nov 13 00:57:35.423: INFO: Container collectd ready: true, restart count 0 Nov 13 00:57:35.423: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 00:57:35.423: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 00:57:35.423: INFO: kube-proxy-p6kbl started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.423: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 00:57:35.423: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.423: INFO: Container 
kube-sriovdp ready: true, restart count 0 Nov 13 00:57:35.423: INFO: affinity-nodeport-8ms79 started at 2021-11-13 00:57:21 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.423: INFO: Container affinity-nodeport ready: true, restart count 0 Nov 13 00:57:35.423: INFO: pod-d50f9f24-af26-4d3d-baf3-99921e3d614c started at 2021-11-13 00:57:31 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.423: INFO: Container test-container ready: false, restart count 0 Nov 13 00:57:35.423: INFO: cmk-init-discover-node1-vkj2s started at 2021-11-12 21:20:18 +0000 UTC (0+3 container statuses recorded) Nov 13 00:57:35.423: INFO: Container discover ready: false, restart count 0 Nov 13 00:57:35.423: INFO: Container init ready: false, restart count 0 Nov 13 00:57:35.423: INFO: Container install ready: false, restart count 0 Nov 13 00:57:35.423: INFO: affinity-nodeport-98pq9 started at 2021-11-13 00:57:21 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.423: INFO: Container affinity-nodeport ready: true, restart count 0 Nov 13 00:57:35.423: INFO: execpodbwk7w started at 2021-11-13 00:55:22 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.423: INFO: Container agnhost-container ready: true, restart count 0 Nov 13 00:57:35.423: INFO: ss2-1 started at 2021-11-13 00:57:27 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.423: INFO: Container webserver ready: false, restart count 0 Nov 13 00:57:35.423: INFO: affinity-nodeport-transition-d9vns started at 2021-11-13 00:57:30 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.423: INFO: Container affinity-nodeport-transition ready: false, restart count 0 Nov 13 00:57:35.423: INFO: node-feature-discovery-worker-zgr4c started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.423: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 00:57:35.423: INFO: cmk-webhook-6c9d5f8578-2gp25 started at 2021-11-12 21:21:01 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.423: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 00:57:35.423: INFO: ss2-0 started at 2021-11-13 00:57:07 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.423: INFO: Container webserver ready: true, restart count 0 Nov 13 00:57:35.423: INFO: prometheus-k8s-0 started at 2021-11-12 21:22:14 +0000 UTC (0+4 container statuses recorded) Nov 13 00:57:35.423: INFO: Container config-reloader ready: true, restart count 0 Nov 13 00:57:35.423: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 00:57:35.423: INFO: Container grafana ready: true, restart count 0 Nov 13 00:57:35.423: INFO: Container prometheus ready: true, restart count 1 Nov 13 00:57:35.423: INFO: affinity-nodeport-transition-m7np5 started at 2021-11-13 00:57:30 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.423: INFO: Container affinity-nodeport-transition ready: false, restart count 0 Nov 13 00:57:35.423: INFO: cmk-4tcdw started at 2021-11-12 21:21:00 +0000 UTC (0+2 container statuses recorded) Nov 13 00:57:35.423: INFO: Container nodereport ready: true, restart count 0 Nov 13 00:57:35.423: INFO: Container reconcile ready: true, restart count 0 Nov 13 00:57:35.423: INFO: node-exporter-hqkfs started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 00:57:35.423: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 00:57:35.423: INFO: Container node-exporter ready: true, restart count 0 Nov 13 00:57:35.423: INFO: affinity-nodeport-dnwfb started at 
2021-11-13 00:57:21 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.423: INFO: Container affinity-nodeport ready: true, restart count 0 Nov 13 00:57:35.423: INFO: nginx-proxy-node1 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.423: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 00:57:35.423: INFO: kube-flannel-r7bbp started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 00:57:35.423: INFO: Init container install-cni ready: true, restart count 2 Nov 13 00:57:35.423: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 00:57:35.423: INFO: kube-multus-ds-amd64-4wqsv started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.423: INFO: Container kube-multus ready: true, restart count 1 Nov 13 00:57:35.423: INFO: prometheus-operator-585ccfb458-qcz7s started at 2021-11-12 21:21:55 +0000 UTC (0+2 container statuses recorded) Nov 13 00:57:35.423: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 00:57:35.423: INFO: Container prometheus-operator ready: true, restart count 0 W1113 00:57:35.438173 36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 00:57:35.693: INFO: Latency metrics for node node1 Nov 13 00:57:35.693: INFO: Logging node info for node node2 Nov 13 00:57:35.696: INFO: Node Info: &Node{ObjectMeta:{node2 652722dd-12b1-4529-ba4d-a00c590e4a68 66554 0 2021-11-12 21:07:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true 
feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-12 21:20:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 00:57:34 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 00:57:34 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 00:57:34 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 00:57:34 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fec67f7547064c508c27d44a9bf99ae7,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0a05ac00-ff21-4518-bf68-3564c7a8cf65,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 00:57:35.696: INFO: Logging kubelet events for node node2 Nov 13 00:57:35.699: INFO: Logging pods the kubelet thinks is on node node2 Nov 13 00:57:35.714: INFO: cmk-qhvr7 started at 2021-11-12 21:21:01 +0000 UTC (0+2 container statuses recorded) Nov 13 00:57:35.714: INFO: Container nodereport ready: true, restart count 0 Nov 13 00:57:35.714: INFO: Container reconcile ready: true, restart count 0 Nov 13 00:57:35.714: INFO: execpod-affinitytf9zk started at 2021-11-13 00:57:30 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.714: INFO: Container agnhost-container ready: true, restart count 0 Nov 13 00:57:35.714: INFO: ss2-0 started at 2021-11-13 00:56:36 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.714: INFO: Container webserver ready: true, restart count 0 Nov 13 00:57:35.714: INFO: 
cmk-init-discover-node2-5f4hp started at 2021-11-12 21:20:38 +0000 UTC (0+3 container statuses recorded) Nov 13 00:57:35.714: INFO: Container discover ready: false, restart count 0 Nov 13 00:57:35.714: INFO: Container init ready: false, restart count 0 Nov 13 00:57:35.714: INFO: Container install ready: false, restart count 0 Nov 13 00:57:35.714: INFO: liveness-c005be3f-6dff-4372-9204-1a2f6b46fd80 started at 2021-11-13 00:56:40 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.714: INFO: Container agnhost-container ready: true, restart count 2 Nov 13 00:57:35.714: INFO: forbid-27279415-nj4vx started at 2021-11-13 00:55:00 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.714: INFO: Container c ready: true, restart count 0 Nov 13 00:57:35.714: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.714: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 00:57:35.714: INFO: node-exporter-hstd9 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 00:57:35.714: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 00:57:35.714: INFO: Container node-exporter ready: true, restart count 0 Nov 13 00:57:35.714: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 started at 2021-11-12 21:25:09 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.714: INFO: Container tas-extender ready: true, restart count 0 Nov 13 00:57:35.714: INFO: nodeport-test-frwlq started at 2021-11-13 00:55:10 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.714: INFO: Container nodeport-test ready: true, restart count 0 Nov 13 00:57:35.714: INFO: ss2-2 started at 2021-11-13 00:57:22 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.714: INFO: Container webserver ready: true, restart count 0 Nov 13 00:57:35.714: INFO: affinity-nodeport-transition-sf4hv started at 2021-11-13 00:57:30 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.714: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Nov 13 00:57:35.714: INFO: nginx-proxy-node2 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.714: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 00:57:35.714: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.714: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 00:57:35.714: INFO: var-expansion-fd0d269e-2699-419c-8402-baf8afcaaa8e started at 2021-11-13 00:55:23 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.714: INFO: Container dapi-container ready: true, restart count 0 Nov 13 00:57:35.714: INFO: pod-projected-configmaps-e9dbf5e3-1c91-4a9f-8b35-d6706d1a28a6 started at 2021-11-13 00:57:31 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.714: INFO: Container projected-configmap-volume-test ready: false, restart count 0 Nov 13 00:57:35.714: INFO: nodeport-test-k9r8q started at 2021-11-13 00:55:10 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.714: INFO: Container nodeport-test ready: true, restart count 0 Nov 13 00:57:35.714: INFO: kube-proxy-pzhf2 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.714: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 00:57:35.714: INFO: kube-flannel-mg66r started at 2021-11-12 21:08:36 +0000 UTC (1+1 container 
statuses recorded) Nov 13 00:57:35.714: INFO: Init container install-cni ready: true, restart count 2 Nov 13 00:57:35.714: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 00:57:35.714: INFO: kube-multus-ds-amd64-2wqj5 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.714: INFO: Container kube-multus ready: true, restart count 1 Nov 13 00:57:35.714: INFO: kubernetes-dashboard-785dcbb76d-w2mls started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.714: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 00:57:35.714: INFO: node-feature-discovery-worker-mm7xs started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.714: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 00:57:35.714: INFO: collectd-mp2z6 started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded) Nov 13 00:57:35.714: INFO: Container collectd ready: true, restart count 0 Nov 13 00:57:35.714: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 00:57:35.714: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 00:57:35.714: INFO: ss2-2 started at 2021-11-13 00:57:11 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.714: INFO: Container webserver ready: true, restart count 0 Nov 13 00:57:35.714: INFO: ss2-1 started at 2021-11-13 00:57:12 +0000 UTC (0+1 container statuses recorded) Nov 13 00:57:35.714: INFO: Container webserver ready: true, restart count 0 W1113 00:57:35.728236 36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 00:57:35.997: INFO: Latency metrics for node node2 Nov 13 00:57:35.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6999" for this suite. 
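------------------------------

The node-info and pod dumps above are the diagnostics the framework emits once a spec has failed; the verdict that follows reports that the NodePort endpoint 10.10.190.207:32019 never became reachable within the 2m budget. Below is a minimal stdlib sketch of that kind of TCP reachability probe. The real test execs an agnhost client pod rather than dialing directly from the test binary, so treat this as an approximation of the check, not the suite's implementation.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// Approximate reachability probe: dial the node IP + NodePort repeatedly
// until a TCP connection succeeds or the overall 2m budget is spent.
// The endpoint is the one reported in the failure verdict below.
func main() {
	const endpoint = "10.10.190.207:32019"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("service reachable on", endpoint)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Fprintf(os.Stderr, "service is not reachable within 2m0s timeout on endpoint %s over TCP protocol\n", endpoint)
	os.Exit(1)
}

------------------------------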
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [146.032 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to create a functioning NodePort service [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 00:57:35.028: Unexpected error: <*errors.errorString | 0xc00410cc20>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32019 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32019 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":5,"skipped":83,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:57:35.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-854e4197-302a-478a-aa6a-fdeed87a4cd9 STEP: Creating a pod to test consume configMaps Nov 13 00:57:35.781: INFO: Waiting up to 5m0s for pod "pod-configmaps-26395693-242b-4140-a490-036566ff2b61" in namespace "configmap-3407" to be "Succeeded or Failed" Nov 13 00:57:35.786: INFO: Pod "pod-configmaps-26395693-242b-4140-a490-036566ff2b61": Phase="Pending", Reason="", readiness=false. Elapsed: 4.947671ms Nov 13 00:57:37.789: INFO: Pod "pod-configmaps-26395693-242b-4140-a490-036566ff2b61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007934636s Nov 13 00:57:39.794: INFO: Pod "pod-configmaps-26395693-242b-4140-a490-036566ff2b61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012716567s STEP: Saw pod success Nov 13 00:57:39.794: INFO: Pod "pod-configmaps-26395693-242b-4140-a490-036566ff2b61" satisfied condition "Succeeded or Failed" Nov 13 00:57:39.796: INFO: Trying to get logs from node node2 pod pod-configmaps-26395693-242b-4140-a490-036566ff2b61 container agnhost-container: STEP: delete the pod Nov 13 00:57:39.811: INFO: Waiting for pod pod-configmaps-26395693-242b-4140-a490-036566ff2b61 to disappear Nov 13 00:57:39.813: INFO: Pod pod-configmaps-26395693-242b-4140-a490-036566ff2b61 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:57:39.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3407" for this suite. 
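------------------------------

The ConfigMap spec above creates the ConfigMap, mounts it into a pod as a volume, and waits for the pod to reach Succeeded after printing the mounted file. A minimal sketch of such a pod spec using the k8s.io/api types follows; the object names and the agnhost mounttest flag are illustrative, not the suite's generated values.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

// Sketch of a pod that consumes a ConfigMap as a volume: mount the
// ConfigMap, print one key's file with agnhost mounttest, then exit so
// the pod phase becomes Succeeded. Names are illustrative.
func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Args:  []string{"mounttest", "--file_content=/etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}

------------------------------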
• ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:57:31.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Nov 13 00:57:31.284: INFO: Waiting up to 5m0s for pod "pod-d50f9f24-af26-4d3d-baf3-99921e3d614c" in namespace "emptydir-9347" to be "Succeeded or Failed" Nov 13 00:57:31.288: INFO: Pod "pod-d50f9f24-af26-4d3d-baf3-99921e3d614c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.92169ms Nov 13 00:57:33.291: INFO: Pod "pod-d50f9f24-af26-4d3d-baf3-99921e3d614c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006633163s Nov 13 00:57:35.295: INFO: Pod "pod-d50f9f24-af26-4d3d-baf3-99921e3d614c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011115809s Nov 13 00:57:37.299: INFO: Pod "pod-d50f9f24-af26-4d3d-baf3-99921e3d614c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014395866s Nov 13 00:57:39.304: INFO: Pod "pod-d50f9f24-af26-4d3d-baf3-99921e3d614c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019656506s Nov 13 00:57:41.307: INFO: Pod "pod-d50f9f24-af26-4d3d-baf3-99921e3d614c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.02281682s STEP: Saw pod success Nov 13 00:57:41.307: INFO: Pod "pod-d50f9f24-af26-4d3d-baf3-99921e3d614c" satisfied condition "Succeeded or Failed" Nov 13 00:57:41.309: INFO: Trying to get logs from node node1 pod pod-d50f9f24-af26-4d3d-baf3-99921e3d614c container test-container: STEP: delete the pod Nov 13 00:57:41.322: INFO: Waiting for pod pod-d50f9f24-af26-4d3d-baf3-99921e3d614c to disappear Nov 13 00:57:41.324: INFO: Pod pod-d50f9f24-af26-4d3d-baf3-99921e3d614c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:57:41.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9347" for this suite. 
• [SLOW TEST:10.080 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":114,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":226,"failed":0} [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:57:39.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 00:57:39.859: INFO: Waiting up to 5m0s for pod "busybox-user-65534-b769a29f-008f-4576-9e09-382001e3e1a9" in namespace "security-context-test-3634" to be "Succeeded or Failed" Nov 13 00:57:39.861: INFO: Pod "busybox-user-65534-b769a29f-008f-4576-9e09-382001e3e1a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179881ms Nov 13 00:57:41.864: INFO: Pod "busybox-user-65534-b769a29f-008f-4576-9e09-382001e3e1a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004490007s Nov 13 00:57:43.867: INFO: Pod "busybox-user-65534-b769a29f-008f-4576-9e09-382001e3e1a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00811532s Nov 13 00:57:43.867: INFO: Pod "busybox-user-65534-b769a29f-008f-4576-9e09-382001e3e1a9" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:57:43.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3634" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":226,"failed":0} SSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:57:43.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating cluster-info Nov 13 00:57:43.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9407 cluster-info' Nov 13 00:57:44.087: INFO: stderr: "" Nov 13 00:57:44.087: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.10.190.202:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:57:44.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9407" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":18,"skipped":229,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:57:44.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:57:44.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7989" for this suite. 
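------------------------------

The events spec above walks an Event through create, list, patch, fetch, and delete. A minimal core/v1 client-go sketch of the same lifecycle, with illustrative names (the conformance test's exact client surface is not shown in the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Create a test event, patch its message, fetch it, delete it, and list
// what remains. The API accepts events that reference nonexistent objects.
func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	ns := "default"
	ctx := context.TODO()

	ev := &corev1.Event{
		ObjectMeta:     metav1.ObjectMeta{Name: "test-event"},
		InvolvedObject: corev1.ObjectReference{Kind: "Pod", Namespace: ns, Name: "some-pod"},
		Reason:         "Testing",
		Message:        "original message",
		Type:           corev1.EventTypeNormal,
	}
	if _, err := cs.CoreV1().Events(ns).Create(ctx, ev, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	patch := []byte(`{"message":"patched message"}`)
	if _, err := cs.CoreV1().Events(ns).Patch(ctx, "test-event", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	got, err := cs.CoreV1().Events(ns).Get(ctx, "test-event", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("fetched:", got.Message)
	if err := cs.CoreV1().Events(ns).Delete(ctx, "test-event", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	list, err := cs.CoreV1().Events(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("events remaining:", len(list.Items))
}

------------------------------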
• ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:57:36.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service nodeport-service with the type=NodePort in namespace services-8355 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8355 STEP: creating replication controller externalsvc in namespace services-8355 I1113 00:57:36.098210 36 runners.go:190] Created replication controller with name: externalsvc, namespace: services-8355, replica count: 2 I1113 00:57:39.149606 36 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 00:57:42.150859 36 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Nov 13 00:57:42.165: INFO: Creating new exec pod Nov 13 00:57:48.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8355 exec execpodk6cvz -- /bin/sh -x -c nslookup nodeport-service.services-8355.svc.cluster.local' Nov 13 00:57:48.448: INFO: stderr: "+ nslookup nodeport-service.services-8355.svc.cluster.local\n" Nov 13 00:57:48.448: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nnodeport-service.services-8355.svc.cluster.local\tcanonical name = externalsvc.services-8355.svc.cluster.local.\nName:\texternalsvc.services-8355.svc.cluster.local\nAddress: 10.233.26.37\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8355, will wait for the garbage collector to delete the pods Nov 13 00:57:48.505: INFO: Deleting ReplicationController externalsvc took: 4.220308ms Nov 13 00:57:48.606: INFO: Terminating ReplicationController externalsvc pods took: 101.259763ms Nov 13 00:58:01.423: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:58:01.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8355" for this suite. 
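------------------------------

The nslookup step above is the heart of the NodePort-to-ExternalName spec: once the service type changes, its cluster-DNS name must resolve as a CNAME to the external name. A stdlib sketch of that check, to be run from inside the cluster (as the exec pod does) so the cluster DNS server is consulted:

package main

import (
	"fmt"
	"net"
	"os"
)

// Resolve the service's cluster-DNS name and print its canonical name.
// net.LookupCNAME returns a fully qualified name with a trailing dot,
// matching the "canonical name = externalsvc...." line in the log above.
func main() {
	cname, err := net.LookupCNAME("nodeport-service.services-8355.svc.cluster.local")
	if err != nil {
		fmt.Fprintln(os.Stderr, "lookup failed:", err)
		os.Exit(1)
	}
	fmt.Println("canonical name:", cname) // expect externalsvc.services-8355.svc.cluster.local.
}

------------------------------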
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:25.382 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":6,"skipped":104,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:58:01.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Nov 13 00:58:01.479: INFO: The status of Pod pod-update-activedeadlineseconds-e0bedc05-b3b6-41a5-a422-1d68c3bc722e is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:58:03.482: INFO: The status of Pod pod-update-activedeadlineseconds-e0bedc05-b3b6-41a5-a422-1d68c3bc722e is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:58:05.483: INFO: The status of Pod pod-update-activedeadlineseconds-e0bedc05-b3b6-41a5-a422-1d68c3bc722e is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Nov 13 00:58:06.001: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e0bedc05-b3b6-41a5-a422-1d68c3bc722e" Nov 13 00:58:06.001: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e0bedc05-b3b6-41a5-a422-1d68c3bc722e" in namespace "pods-6617" to be "terminated due to deadline exceeded" Nov 13 00:58:06.003: INFO: Pod "pod-update-activedeadlineseconds-e0bedc05-b3b6-41a5-a422-1d68c3bc722e": Phase="Running", Reason="", readiness=true. Elapsed: 2.197978ms Nov 13 00:58:08.008: INFO: Pod "pod-update-activedeadlineseconds-e0bedc05-b3b6-41a5-a422-1d68c3bc722e": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.006656317s Nov 13 00:58:08.008: INFO: Pod "pod-update-activedeadlineseconds-e0bedc05-b3b6-41a5-a422-1d68c3bc722e" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:58:08.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6617" for this suite. 
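------------------------------

The "updating the pod" step above shortens spec.activeDeadlineSeconds on a running pod, after which the kubelet terminates it with reason DeadlineExceeded (activeDeadlineSeconds is one of the few mutable pod-spec fields). A minimal sketch of one way to make that update, via a JSON merge patch; whether the suite patches or does a full update is not shown in the log, and the names are illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Merge-patch a running pod so its active deadline is 5 seconds; the
// kubelet then fails the pod with reason DeadlineExceeded.
func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	patch := []byte(`{"spec":{"activeDeadlineSeconds":5}}`)
	pod, err := cs.CoreV1().Pods("default").Patch(context.TODO(),
		"pod-update-activedeadlineseconds-example", types.MergePatchType,
		patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("patched pod:", pod.Name)
}

------------------------------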
• [SLOW TEST:6.572 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":106,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:58:08.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name secret-emptykey-test-089ac4d5-ac53-404a-9cbb-e14236d211b1 [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:58:08.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4906" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":8,"skipped":133,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:55:23.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod with failed condition STEP: updating the pod Nov 13 00:57:24.339: INFO: Successfully updated pod "var-expansion-fd0d269e-2699-419c-8402-baf8afcaaa8e" STEP: waiting for pod running STEP: deleting the pod gracefully Nov 13 00:57:28.346: INFO: Deleting pod "var-expansion-fd0d269e-2699-419c-8402-baf8afcaaa8e" in namespace "var-expansion-8415" Nov 13 00:57:28.350: INFO: Wait up to 5m0s for pod "var-expansion-fd0d269e-2699-419c-8402-baf8afcaaa8e" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:58:12.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8415" for this suite. 
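------------------------------

The empty-secret-key spec earlier in this stretch is a negative test: a Secret whose data map contains an empty key must be rejected by server-side validation, so the create call is expected to fail. A minimal client-go sketch, with illustrative names:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Attempt to create a Secret with an empty data key and assert that the
// API server rejects it.
func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-example"},
		Data: map[string][]byte{
			"": []byte("value-1"), // empty key: invalid
		},
	}
	_, err = cs.CoreV1().Secrets("default").Create(context.TODO(), secret, metav1.CreateOptions{})
	if err == nil {
		panic("expected the API server to reject the empty secret key")
	}
	fmt.Println("create rejected as expected:", err)
}

------------------------------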
• [SLOW TEST:168.578 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":6,"skipped":105,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":19,"skipped":249,"failed":0} [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:57:44.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-505 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 13 00:57:44.225: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 13 00:57:44.256: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:57:46.259: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:57:48.261: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 00:57:50.259: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 00:57:52.260: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 00:57:54.265: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 00:57:56.263: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 00:57:58.263: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 00:58:00.259: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 00:58:02.260: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 00:58:04.261: INFO: The status of Pod netserver-0 is Running (Ready = true) Nov 13 00:58:04.267: INFO: The status of Pod netserver-1 is Running (Ready = false) Nov 13 00:58:06.274: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Nov 13 00:58:10.309: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Nov 13 00:58:10.309: INFO: Going to poll 10.244.3.86 on port 8081 at least 0 times, with a maximum of 34 tries before failing Nov 13 00:58:10.312: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.3.86 8081 | grep -v '^\s*$'] Namespace:pod-network-test-505 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 00:58:10.312: INFO: >>> kubeConfig: /root/.kube/config Nov 13 00:58:11.408: INFO: Found 
all 1 expected endpoints: [netserver-0] Nov 13 00:58:11.408: INFO: Going to poll 10.244.4.26 on port 8081 at least 0 times, with a maximum of 34 tries before failing Nov 13 00:58:11.411: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.4.26 8081 | grep -v '^\s*$'] Namespace:pod-network-test-505 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 00:58:11.411: INFO: >>> kubeConfig: /root/.kube/config Nov 13 00:58:12.505: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:58:12.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-505" for this suite. • [SLOW TEST:28.311 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":249,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:58:08.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 13 00:58:08.448: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 13 00:58:10.456: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361888, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361888, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361888, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361888, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 13 00:58:13.470: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:58:13.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4978" for this suite. STEP: Destroying namespace "webhook-4978-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.371 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":9,"skipped":149,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:58:12.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Nov 13 00:58:12.460: INFO: Waiting up to 5m0s for pod "downward-api-a4dd029f-d9b9-421a-8b0b-590472e494d7" in namespace "downward-api-2445" to be "Succeeded or Failed" Nov 13 00:58:12.462: INFO: Pod "downward-api-a4dd029f-d9b9-421a-8b0b-590472e494d7": Phase="Pending", Reason="", readiness=false. Elapsed: 1.94986ms Nov 13 00:58:14.464: INFO: Pod "downward-api-a4dd029f-d9b9-421a-8b0b-590472e494d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004769733s Nov 13 00:58:16.468: INFO: Pod "downward-api-a4dd029f-d9b9-421a-8b0b-590472e494d7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008242602s STEP: Saw pod success Nov 13 00:58:16.468: INFO: Pod "downward-api-a4dd029f-d9b9-421a-8b0b-590472e494d7" satisfied condition "Succeeded or Failed" Nov 13 00:58:16.471: INFO: Trying to get logs from node node2 pod downward-api-a4dd029f-d9b9-421a-8b0b-590472e494d7 container dapi-container: STEP: delete the pod Nov 13 00:58:16.488: INFO: Waiting for pod downward-api-a4dd029f-d9b9-421a-8b0b-590472e494d7 to disappear Nov 13 00:58:16.490: INFO: Pod downward-api-a4dd029f-d9b9-421a-8b0b-590472e494d7 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:58:16.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2445" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":131,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:58:12.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating server pod server in namespace prestop-407 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-407 STEP: Deleting pre-stop pod Nov 13 00:58:27.581: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:58:27.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-407" for this suite. 
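(For reference: the lifecycle pattern this PreStop spec exercises is a tester pod whose preStop hook reports to a server pod, so the hook's execution is observable after the tester is deleted. A minimal sketch follows; the pod name, image, and hook command are illustrative assumptions, not the suite's actual fixture, which wires in the server pod's real IP.)

apiVersion: v1
kind: Pod
metadata:
  name: tester                     # hypothetical; the test's tester pod lives in prestop-407
spec:
  containers:
  - name: tester
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["pause"]
    lifecycle:
      preStop:                     # runs before the container receives SIGTERM
        exec:
          command: ["wget", "-O-", "http://<server-pod-ip>:8080/prestop"]

(Deleting the tester runs the hook first, which is what increments the "prestop": 1 counter in the server's Saw: report above.)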
• [SLOW TEST:15.078 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":21,"skipped":250,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:58:16.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 00:58:16.535: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Nov 13 00:58:24.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5101 --namespace=crd-publish-openapi-5101 create -f -' Nov 13 00:58:25.060: INFO: stderr: "" Nov 13 00:58:25.060: INFO: stdout: "e2e-test-crd-publish-openapi-8971-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Nov 13 00:58:25.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5101 --namespace=crd-publish-openapi-5101 delete e2e-test-crd-publish-openapi-8971-crds test-cr' Nov 13 00:58:25.231: INFO: stderr: "" Nov 13 00:58:25.231: INFO: stdout: "e2e-test-crd-publish-openapi-8971-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Nov 13 00:58:25.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5101 --namespace=crd-publish-openapi-5101 apply -f -' Nov 13 00:58:25.593: INFO: stderr: "" Nov 13 00:58:25.593: INFO: stdout: "e2e-test-crd-publish-openapi-8971-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Nov 13 00:58:25.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5101 --namespace=crd-publish-openapi-5101 delete e2e-test-crd-publish-openapi-8971-crds test-cr' Nov 13 00:58:25.774: INFO: stderr: "" Nov 13 00:58:25.774: INFO: stdout: "e2e-test-crd-publish-openapi-8971-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Nov 13 00:58:25.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5101 explain e2e-test-crd-publish-openapi-8971-crds' Nov 13 00:58:26.133: INFO: stderr: "" Nov 13 00:58:26.133: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8971-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:58:29.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "crd-publish-openapi-5101" for this suite. • [SLOW TEST:13.226 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":8,"skipped":136,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:58:27.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-projected-all-test-volume-a2488ff3-b854-4122-99f3-5e18ff22df04 STEP: Creating secret with name secret-projected-all-test-volume-0a04e6f2-e43b-41de-a79f-f81626f71060 STEP: Creating a pod to test Check all projections for projected volume plugin Nov 13 00:58:27.698: INFO: Waiting up to 5m0s for pod "projected-volume-ce9ae673-ac7f-498c-9de9-c19ee5cb3898" in namespace "projected-6355" to be "Succeeded or Failed" Nov 13 00:58:27.703: INFO: Pod "projected-volume-ce9ae673-ac7f-498c-9de9-c19ee5cb3898": Phase="Pending", Reason="", readiness=false. Elapsed: 5.329414ms Nov 13 00:58:29.707: INFO: Pod "projected-volume-ce9ae673-ac7f-498c-9de9-c19ee5cb3898": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008891065s Nov 13 00:58:31.710: INFO: Pod "projected-volume-ce9ae673-ac7f-498c-9de9-c19ee5cb3898": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011722695s Nov 13 00:58:33.714: INFO: Pod "projected-volume-ce9ae673-ac7f-498c-9de9-c19ee5cb3898": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015838211s STEP: Saw pod success Nov 13 00:58:33.714: INFO: Pod "projected-volume-ce9ae673-ac7f-498c-9de9-c19ee5cb3898" satisfied condition "Succeeded or Failed" Nov 13 00:58:33.717: INFO: Trying to get logs from node node2 pod projected-volume-ce9ae673-ac7f-498c-9de9-c19ee5cb3898 container projected-all-volume-test: STEP: delete the pod Nov 13 00:58:33.753: INFO: Waiting for pod projected-volume-ce9ae673-ac7f-498c-9de9-c19ee5cb3898 to disappear Nov 13 00:58:33.755: INFO: Pod projected-volume-ce9ae673-ac7f-498c-9de9-c19ee5cb3898 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:58:33.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6355" for this suite. 
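(The "Projected combined" spec above mounts a configMap, a secret, and downward API data through one projected volume. A minimal sketch of that shape, with placeholder object names standing in for the test's UUID-suffixed ones:)

apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo      # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    volumeMounts:
    - name: all-in-one
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: all-in-one               # one volume, three projection sources
    projected:
      sources:
      - configMap:
          name: my-configmap
      - secret:
          name: my-secret
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name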
• [SLOW TEST:6.120 seconds] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":277,"failed":0} SSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:58:29.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Nov 13 00:58:29.935: INFO: Pod name pod-release: Found 0 pods out of 1 Nov 13 00:58:34.939: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:58:35.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-292" for this suite. 
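(The ReplicationController spec above checks that relabeling a pod out of the selector "releases" it. Sketched as a manifest; the controller name matches the log's pod-release prefix, the image is an assumption:)

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: main
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32

(Overwriting the matching label, e.g. kubectl label pod <pod-release-xxxxx> name=released --overwrite, removes the pod from the controller's selection; the pod keeps running unowned and the controller creates a replacement to hold replicas at 1.)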
• [SLOW TEST:6.057 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":9,"skipped":221,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:58:33.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 00:58:33.799: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Nov 13 00:58:33.816: INFO: The status of Pod pod-exec-websocket-14419288-3646-4f6f-8165-cec64398c280 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:58:35.821: INFO: The status of Pod pod-exec-websocket-14419288-3646-4f6f-8165-cec64398c280 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:58:37.820: INFO: The status of Pod pod-exec-websocket-14419288-3646-4f6f-8165-cec64398c280 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:58:39.821: INFO: The status of Pod pod-exec-websocket-14419288-3646-4f6f-8165-cec64398c280 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:58:39.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1158" for this suite. 
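(The Pods spec above drives exec through the API server's WebSocket upgrade path rather than kubectl's usual SPDY client. The pod itself is ordinary; a sketch, with the exec subresource it is dialed on shown as a comment. The pod name and command here are illustrative:)

apiVersion: v1
kind: Pod
metadata:
  name: pod-exec-websocket-demo    # the test generates a UUID-suffixed name
spec:
  containers:
  - name: main
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["pause"]                # assumption: any long-running command works
# Once Running, the client opens a WebSocket to the exec subresource, roughly:
#   GET /api/v1/namespaces/pods-1158/pods/pod-exec-websocket-demo/exec
#       ?command=echo&command=remote+exec&container=main&stdout=true&stderr=true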
• [SLOW TEST:6.144 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":280,"failed":0} SS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:58:36.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 00:58:36.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2930 create -f -' Nov 13 00:58:36.491: INFO: stderr: "" Nov 13 00:58:36.491: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Nov 13 00:58:36.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2930 create -f -' Nov 13 00:58:36.830: INFO: stderr: "" Nov 13 00:58:36.830: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Nov 13 00:58:37.834: INFO: Selector matched 1 pods for map[app:agnhost] Nov 13 00:58:37.834: INFO: Found 0 / 1 Nov 13 00:58:38.835: INFO: Selector matched 1 pods for map[app:agnhost] Nov 13 00:58:38.835: INFO: Found 0 / 1 Nov 13 00:58:39.834: INFO: Selector matched 1 pods for map[app:agnhost] Nov 13 00:58:39.834: INFO: Found 1 / 1 Nov 13 00:58:39.834: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Nov 13 00:58:39.837: INFO: Selector matched 1 pods for map[app:agnhost] Nov 13 00:58:39.837: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Nov 13 00:58:39.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2930 describe pod agnhost-primary-9jtv4' Nov 13 00:58:40.039: INFO: stderr: "" Nov 13 00:58:40.040: INFO: stdout: "Name: agnhost-primary-9jtv4\nNamespace: kubectl-2930\nPriority: 0\nNode: node2/10.10.190.208\nStart Time: Sat, 13 Nov 2021 00:58:36 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.41\"\n ],\n \"mac\": \"32:81:35:15:17:ec\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.41\"\n ],\n \"mac\": \"32:81:35:15:17:ec\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: collectd\nStatus: Running\nIP: 10.244.4.41\nIPs:\n IP: 10.244.4.41\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: docker://42dc5ff62acef20e50016bda2612e80f147bac81d03612b5e8c43cad49eeab63\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 13 Nov 2021 00:58:38 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tbcm6 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-tbcm6:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-2930/agnhost-primary-9jtv4 to node2\n Normal Pulling 2s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n Normal Pulled 2s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 279.380468ms\n Normal Created 2s kubelet Created container agnhost-primary\n Normal Started 2s kubelet Started container agnhost-primary\n" Nov 13 00:58:40.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2930 describe rc agnhost-primary' Nov 13 00:58:40.255: INFO: stderr: "" Nov 13 00:58:40.255: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-2930\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-primary-9jtv4\n" Nov 13 00:58:40.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2930 describe service agnhost-primary' Nov 13 00:58:40.436: INFO: stderr: "" Nov 13 
00:58:40.436: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-2930\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.233.9.51\nIPs: 10.233.9.51\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.4.41:6379\nSession Affinity: None\nEvents: \n" Nov 13 00:58:40.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2930 describe node master1' Nov 13 00:58:40.659: INFO: stderr: "" Nov 13 00:58:40.659: INFO: stdout: "Name: master1\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=master1\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations: flannel.alpha.coreos.com/backend-data: null\n flannel.alpha.coreos.com/backend-type: host-gw\n flannel.alpha.coreos.com/kube-subnet-manager: true\n flannel.alpha.coreos.com/public-ip: 10.10.190.202\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 12 Nov 2021 21:05:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: master1\n AcquireTime: \n RenewTime: Sat, 13 Nov 2021 00:58:38 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Fri, 12 Nov 2021 21:11:25 +0000 Fri, 12 Nov 2021 21:11:25 +0000 FlannelIsUp Flannel is running on this node\n MemoryPressure False Sat, 13 Nov 2021 00:58:38 +0000 Fri, 12 Nov 2021 21:05:48 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 13 Nov 2021 00:58:38 +0000 Fri, 12 Nov 2021 21:05:48 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 13 Nov 2021 00:58:38 +0000 Fri, 12 Nov 2021 21:05:48 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 13 Nov 2021 00:58:38 +0000 Fri, 12 Nov 2021 21:11:19 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.10.190.202\n Hostname: master1\nCapacity:\n cpu: 80\n ephemeral-storage: 439913340Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 196518324Ki\n pods: 110\nAllocatable:\n cpu: 79550m\n ephemeral-storage: 405424133473\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 195629492Ki\n pods: 110\nSystem Info:\n Machine ID: 94e600d00e79450a9fb632d8473a11eb\n System UUID: 00ACFB60-0631-E711-906E-0017A4403562\n Boot ID: 6e4bb815-8b93-47c2-9321-93e7ada261f6\n Kernel Version: 3.10.0-1160.45.1.el7.x86_64\n OS Image: CentOS Linux 7 (Core)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://20.10.10\n Kubelet Version: v1.21.1\n Kube-Proxy Version: v1.21.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system container-registry-65d7c44b96-qwqcz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3h45m\n kube-system coredns-8474476ff8-9vc8b 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 3h49m\n kube-system kube-apiserver-master1 250m (0%) 0 (0%) 0 (0%) 0 (0%) 3h43m\n 
kube-system kube-controller-manager-master1 200m (0%) 0 (0%) 0 (0%) 0 (0%) 3h51m\n kube-system kube-flannel-79bvx 150m (0%) 300m (0%) 64M (0%) 500M (0%) 3h50m\n kube-system kube-multus-ds-amd64-qtmwl 100m (0%) 100m (0%) 90Mi (0%) 90Mi (0%) 3h49m\n kube-system kube-proxy-6m7qt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3h51m\n kube-system kube-scheduler-master1 100m (0%) 0 (0%) 0 (0%) 0 (0%) 3h33m\n monitoring node-exporter-zm5hq 112m (0%) 270m (0%) 200Mi (0%) 220Mi (0%) 3h36m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1012m (1%) 670m (0%)\n memory 431140Ki (0%) 1003316480 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Nov 13 00:58:40.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2930 describe namespace kubectl-2930' Nov 13 00:58:40.840: INFO: stderr: "" Nov 13 00:58:40.840: INFO: stdout: "Name: kubectl-2930\nLabels: e2e-framework=kubectl\n e2e-run=5ecf98c6-62d8-4527-a71f-1dacf2b776e2\n kubernetes.io/metadata.name=kubectl-2930\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:58:40.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2930" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":10,"skipped":266,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:57:41.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W1113 00:57:42.450640 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 00:58:44.467: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:58:44.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4649" for this suite. 
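(The Garbage collector spec above deletes a deployment with an Orphan propagation policy and then watches for about a minute, as the 00:57:42 to 00:58:44 gap shows, to confirm the ReplicaSet is not collected. A sketch of the delete payload, shown as YAML; the deployment name is not printed in the log, so a placeholder stands in:)

# Body for DELETE /apis/apps/v1/namespaces/gc-4649/deployments/<deployment-name>
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan          # leave the owned ReplicaSet behind instead of cascading

(With kubectl 1.20+ the equivalent is: kubectl delete deployment <deployment-name> --cascade=orphan.)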
• [SLOW TEST:63.093 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":14,"skipped":138,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:58:39.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 00:58:39.949: INFO: Creating simple deployment test-new-deployment Nov 13 00:58:39.958: INFO: deployment "test-new-deployment" doesn't have the required revision set Nov 13 00:58:41.965: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361919, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361919, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361919, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361919, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 00:58:43.969: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361919, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361919, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361919, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361919, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: getting scale 
subresource STEP: updating a scale subresource STEP: verifying the deployment Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Nov 13 00:58:45.984: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-6893 88041fde-7c9a-43fd-9666-79aba7925775 68017 3 2021-11-13 00:58:39 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-11-13 00:58:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-11-13 00:58:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000992aa8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-11-13 00:58:44 +0000 UTC,LastTransitionTime:2021-11-13 00:58:44 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2021-11-13
00:58:44 +0000 UTC,LastTransitionTime:2021-11-13 00:58:39 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Nov 13 00:58:45.987: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-6893 1d37b4aa-7b5e-4563-bcd7-33b197ceea64 68016 2 2021-11-13 00:58:39 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 88041fde-7c9a-43fd-9666-79aba7925775 0xc000992f87 0xc000992f88}] [] [{kube-controller-manager Update apps/v1 2021-11-13 00:58:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"88041fde-7c9a-43fd-9666-79aba7925775\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000992ff8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Nov 13 00:58:45.990: INFO: Pod "test-new-deployment-847dcfb7fb-c6tf2" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-c6tf2 test-new-deployment-847dcfb7fb- deployment-6893 14624451-d429-4901-9bf2-bfcc8cbe56a8 68021 0 2021-11-13 00:58:45 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 1d37b4aa-7b5e-4563-bcd7-33b197ceea64 0xc00099348f 0xc0009934a0}] [] [{kube-controller-manager Update v1 2021-11-13 00:58:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d37b4aa-7b5e-4563-bcd7-33b197ceea64\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-l5tmk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l5tmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 00:58:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 13 00:58:45.991: INFO: Pod "test-new-deployment-847dcfb7fb-nrhb6" is available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-nrhb6 test-new-deployment-847dcfb7fb- deployment-6893 69201dda-4352-42a1-8911-ca77ca6713c4 67983 0 2021-11-13 00:58:39 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.42" ], "mac": "f2:fc:e5:e3:4f:eb", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.42" ], "mac": "f2:fc:e5:e3:4f:eb", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 1d37b4aa-7b5e-4563-bcd7-33b197ceea64 0xc0009936af 0xc0009936e0}] [] [{kube-controller-manager Update v1 2021-11-13 00:58:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d37b4aa-7b5e-4563-bcd7-33b197ceea64\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-13 00:58:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-13 00:58:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.42\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-t65pl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t65pl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 00:58:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 00:58:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 00:58:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 00:58:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.42,StartTime:2021-11-13 00:58:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-13 00:58:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://65624c7bbd24096b92a98f575819d8f3400ff8f4698ba7a5a5b4416651300281,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.42,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:58:45.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6893" for this suite. 
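(The Deployment spec above reads, updates, and patches the scale subresource rather than the deployment object itself. The subresource is a standalone Scale object; replicas: 4 below matches the Replicas:*4 visible in the dump:)

# GET/PUT /apis/apps/v1/namespaces/deployment-6893/deployments/test-new-deployment/scale
apiVersion: autoscaling/v1
kind: Scale
metadata:
  name: test-new-deployment
  namespace: deployment-6893
spec:
  replicas: 4

(kubectl equivalent: kubectl -n deployment-6893 scale deployment/test-new-deployment --replicas=4.)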
• [SLOW TEST:6.073 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":24,"skipped":282,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:58:46.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-73353926-d2c1-4f94-98c9-08630e9f1228 STEP: Creating a pod to test consume configMaps Nov 13 00:58:46.079: INFO: Waiting up to 5m0s for pod "pod-configmaps-56157022-a991-48b3-8449-cb6e8277fe6f" in namespace "configmap-7182" to be "Succeeded or Failed" Nov 13 00:58:46.081: INFO: Pod "pod-configmaps-56157022-a991-48b3-8449-cb6e8277fe6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290647ms Nov 13 00:58:48.085: INFO: Pod "pod-configmaps-56157022-a991-48b3-8449-cb6e8277fe6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006644855s Nov 13 00:58:50.089: INFO: Pod "pod-configmaps-56157022-a991-48b3-8449-cb6e8277fe6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010556094s STEP: Saw pod success Nov 13 00:58:50.089: INFO: Pod "pod-configmaps-56157022-a991-48b3-8449-cb6e8277fe6f" satisfied condition "Succeeded or Failed" Nov 13 00:58:50.092: INFO: Trying to get logs from node node1 pod pod-configmaps-56157022-a991-48b3-8449-cb6e8277fe6f container agnhost-container: STEP: delete the pod Nov 13 00:58:50.106: INFO: Waiting for pod pod-configmaps-56157022-a991-48b3-8449-cb6e8277fe6f to disappear Nov 13 00:58:50.107: INFO: Pod pod-configmaps-56157022-a991-48b3-8449-cb6e8277fe6f no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:58:50.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7182" for this suite. 
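(The ConfigMap spec above mounts a configMap as a volume in a pod that runs as a non-root UID. A minimal sketch; the object names, key, UID, and container args are assumptions, though the agnhost-container name and image family come from the log:)

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume      # the test's name carries a UUID suffix
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo        # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # non-root is the point of this variant
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["mounttest", "--file_content=/etc/configmap-volume/data-1"]   # assumed fixture args
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume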
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":303,"failed":0} S ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:58:44.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3314.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3314.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 13 00:58:50.638: INFO: DNS probes using dns-3314/dns-test-657a6701-9f41-4d87-8337-3f07371e854a succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:58:50.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3314" for this suite. 
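(The DNS spec above bakes its dig loop into prober pods. The same checks can be reproduced interactively from a throwaway utility pod; the pod name and image tag here are assumptions:)

apiVersion: v1
kind: Pod
metadata:
  name: dnsutils                   # hypothetical debug pod, not the test's prober
spec:
  restartPolicy: Never
  containers:
  - name: dnsutils
    image: k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4
    command: ["sleep", "3600"]
# Mirroring the probes above:
#   kubectl exec dnsutils -- dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A
#   kubectl exec dnsutils -- dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A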
• [SLOW TEST:6.081 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":15,"skipped":185,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:40.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-c005be3f-6dff-4372-9204-1a2f6b46fd80 in namespace container-probe-5471 Nov 13 00:56:44.114: INFO: Started pod liveness-c005be3f-6dff-4372-9204-1a2f6b46fd80 in namespace container-probe-5471 STEP: checking the pod's current state and verifying that restartCount is present Nov 13 00:56:44.117: INFO: Initial restart count of pod liveness-c005be3f-6dff-4372-9204-1a2f6b46fd80 is 0 Nov 13 00:57:04.158: INFO: Restart count of pod container-probe-5471/liveness-c005be3f-6dff-4372-9204-1a2f6b46fd80 is now 1 (20.041525989s elapsed) Nov 13 00:57:26.200: INFO: Restart count of pod container-probe-5471/liveness-c005be3f-6dff-4372-9204-1a2f6b46fd80 is now 2 (42.082862101s elapsed) Nov 13 00:57:42.231: INFO: Restart count of pod container-probe-5471/liveness-c005be3f-6dff-4372-9204-1a2f6b46fd80 is now 3 (58.114359419s elapsed) Nov 13 00:58:02.271: INFO: Restart count of pod container-probe-5471/liveness-c005be3f-6dff-4372-9204-1a2f6b46fd80 is now 4 (1m18.154501422s elapsed) Nov 13 00:59:02.409: INFO: Restart count of pod container-probe-5471/liveness-c005be3f-6dff-4372-9204-1a2f6b46fd80 is now 5 (2m18.29203594s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:59:02.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5471" for this suite. 
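(The container-probe spec above relies on a liveness probe that periodically fails, so the kubelet keeps restarting the container and restartCount climbs monotonically: 1 through 5 over roughly 2m18s in the log. A standard sketch of such a pod; the image tag and timings are illustrative:)

apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo              # the test's pod carries a UUID-suffixed name
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]   # fails once /tmp/healthy is removed
      initialDelaySeconds: 5
      periodSeconds: 5
# Watch the count increase, as the test asserts:
#   kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'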
• [SLOW TEST:142.354 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":327,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:59:02.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:59:02.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2908" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":15,"skipped":389,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:59:02.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 13 00:59:02.659: INFO: Waiting up to 5m0s for pod "downwardapi-volume-06f0026d-dd8d-4037-920c-864be9d21338" in namespace "downward-api-251" to be "Succeeded or Failed" Nov 13 00:59:02.662: INFO: Pod "downwardapi-volume-06f0026d-dd8d-4037-920c-864be9d21338": Phase="Pending", Reason="", readiness=false. Elapsed: 3.346861ms Nov 13 00:59:04.666: INFO: Pod "downwardapi-volume-06f0026d-dd8d-4037-920c-864be9d21338": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007424468s Nov 13 00:59:06.670: INFO: Pod "downwardapi-volume-06f0026d-dd8d-4037-920c-864be9d21338": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011416098s STEP: Saw pod success Nov 13 00:59:06.670: INFO: Pod "downwardapi-volume-06f0026d-dd8d-4037-920c-864be9d21338" satisfied condition "Succeeded or Failed" Nov 13 00:59:06.673: INFO: Trying to get logs from node node2 pod downwardapi-volume-06f0026d-dd8d-4037-920c-864be9d21338 container client-container: STEP: delete the pod Nov 13 00:59:06.687: INFO: Waiting for pod downwardapi-volume-06f0026d-dd8d-4037-920c-864be9d21338 to disappear Nov 13 00:59:06.690: INFO: Pod downwardapi-volume-06f0026d-dd8d-4037-920c-864be9d21338 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:59:06.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-251" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":394,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:59:06.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:59:06.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6849" for this suite. 
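------------------------------
The ServiceAccount lifecycle above (create, watch, patch, find by label, delete) maps directly onto client-go calls. A minimal sketch, assuming the kubeconfig path shown in the log and a hypothetical account name and label:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	sa := cs.CoreV1().ServiceAccounts("default")
	ctx := context.Background()

	// Create a ServiceAccount (name is illustrative).
	created, err := sa.Create(ctx, &corev1.ServiceAccount{
		ObjectMeta: metav1.ObjectMeta{Name: "lifecycle-demo"},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Patch a label onto it (strategic merge patch).
	patch := []byte(`{"metadata":{"labels":{"demo":"patched"}}}`)
	if _, err := sa.Patch(ctx, created.Name, types.StrategicMergePatchType,
		patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Find it again by label selector, as the spec does.
	list, err := sa.List(ctx, metav1.ListOptions{LabelSelector: "demo=patched"})
	if err != nil || len(list.Items) != 1 {
		panic(fmt.Sprintf("expected 1 match, got %d (err=%v)", len(list.Items), err))
	}

	// Delete it to finish the lifecycle.
	if err := sa.Delete(ctx, created.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("ServiceAccount lifecycle completed")
}
------------------------------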
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":17,"skipped":430,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:58:40.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-2603 STEP: creating service affinity-clusterip in namespace services-2603 STEP: creating replication controller affinity-clusterip in namespace services-2603 I1113 00:58:40.898747 27 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-2603, replica count: 3 I1113 00:58:43.949562 27 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 00:58:46.950443 27 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 13 00:58:46.956: INFO: Creating new exec pod Nov 13 00:58:51.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2603 exec execpod-affinity5ssqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Nov 13 00:58:52.634: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip 80\n+ echo hostName\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Nov 13 00:58:52.634: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Nov 13 00:58:52.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2603 exec execpod-affinity5ssqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.30.0 80' Nov 13 00:58:52.908: INFO: stderr: "+ nc -v -t -w 2 10.233.30.0 80\n+ echo hostName\nConnection to 10.233.30.0 80 port [tcp/http] succeeded!\n" Nov 13 00:58:52.908: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Nov 13 00:58:52.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2603 exec execpod-affinity5ssqx -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.30.0:80/ ; done' Nov 13 00:58:53.530: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.233.30.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.0:80/\n" Nov 13 00:58:53.530: INFO: stdout: "\naffinity-clusterip-jsw8p\naffinity-clusterip-jsw8p\naffinity-clusterip-jsw8p\naffinity-clusterip-jsw8p\naffinity-clusterip-jsw8p\naffinity-clusterip-jsw8p\naffinity-clusterip-jsw8p\naffinity-clusterip-jsw8p\naffinity-clusterip-jsw8p\naffinity-clusterip-jsw8p\naffinity-clusterip-jsw8p\naffinity-clusterip-jsw8p\naffinity-clusterip-jsw8p\naffinity-clusterip-jsw8p\naffinity-clusterip-jsw8p\naffinity-clusterip-jsw8p" Nov 13 00:58:53.530: INFO: Received response from host: affinity-clusterip-jsw8p Nov 13 00:58:53.530: INFO: Received response from host: affinity-clusterip-jsw8p Nov 13 00:58:53.530: INFO: Received response from host: affinity-clusterip-jsw8p Nov 13 00:58:53.530: INFO: Received response from host: affinity-clusterip-jsw8p Nov 13 00:58:53.530: INFO: Received response from host: affinity-clusterip-jsw8p Nov 13 00:58:53.530: INFO: Received response from host: affinity-clusterip-jsw8p Nov 13 00:58:53.530: INFO: Received response from host: affinity-clusterip-jsw8p Nov 13 00:58:53.530: INFO: Received response from host: affinity-clusterip-jsw8p Nov 13 00:58:53.530: INFO: Received response from host: affinity-clusterip-jsw8p Nov 13 00:58:53.530: INFO: Received response from host: affinity-clusterip-jsw8p Nov 13 00:58:53.530: INFO: Received response from host: affinity-clusterip-jsw8p Nov 13 00:58:53.530: INFO: Received response from host: affinity-clusterip-jsw8p Nov 13 00:58:53.530: INFO: Received response from host: affinity-clusterip-jsw8p Nov 13 00:58:53.530: INFO: Received response from host: affinity-clusterip-jsw8p Nov 13 00:58:53.530: INFO: Received response from host: affinity-clusterip-jsw8p Nov 13 00:58:53.530: INFO: Received response from host: affinity-clusterip-jsw8p Nov 13 00:58:53.531: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-2603, will wait for the garbage collector to delete the pods Nov 13 00:58:53.599: INFO: Deleting ReplicationController affinity-clusterip took: 3.97512ms Nov 13 00:58:53.700: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.752927ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:59:11.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2603" for this suite. 
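------------------------------
All sixteen curl iterations above landed on affinity-clusterip-jsw8p because the Service pins each client IP to one backend. A sketch of the relevant Service object follows; the selector and target port are assumptions, and spec.sessionAffinity is the setting actually under test.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "affinity-clusterip"}, // assumed selector
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376), // assumed backend port
			}},
			// This is what pins every request from one client to one backend,
			// producing the 16 identical "affinity-clusterip-jsw8p" replies above.
			SessionAffinity: corev1.ServiceAffinityClientIP,
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
------------------------------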
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:30.652 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":11,"skipped":272,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:59:06.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:59:12.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5791" for this suite. 
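------------------------------
The "patching ReplicationController scale" step above goes through the scale subresource rather than editing spec.replicas directly. A hedged client-go sketch of that call; the namespace, RC name, and replica count are placeholders:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	rc := cs.CoreV1().ReplicationControllers("default")
	ctx := context.Background()

	// Read the current scale through the subresource.
	scale, err := rc.GetScale(ctx, "rc-lifecycle-demo", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 2 // the spec scales to its max amount; 2 is illustrative
	if _, err := rc.UpdateScale(ctx, "rc-lifecycle-demo", scale,
		metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("scaled rc-lifecycle-demo to", scale.Spec.Replicas)
}
------------------------------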
• [SLOW TEST:5.526 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":18,"skipped":434,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:59:12.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 00:59:12.479: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Nov 13 00:59:12.493: INFO: The status of Pod pod-logs-websocket-aa85e2e6-dbb8-4b36-9629-3aa41fb6f536 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:59:14.495: INFO: The status of Pod pod-logs-websocket-aa85e2e6-dbb8-4b36-9629-3aa41fb6f536 is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:59:16.497: INFO: The status of Pod pod-logs-websocket-aa85e2e6-dbb8-4b36-9629-3aa41fb6f536 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:59:16.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-319" for this suite. 
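------------------------------
The spec above fetches logs over the websocket transport; a simpler sketch that reads the same logs endpoint through client-go's standard stream is shown below (pod and namespace names are placeholders):

package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Placeholders; the e2e pod echoes known output so the retrieved
	// log can be compared against it.
	req := cs.CoreV1().Pods("default").GetLogs("pod-logs-demo",
		&corev1.PodLogOptions{})
	stream, err := req.Stream(context.Background())
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	io.Copy(os.Stdout, stream) // print whatever the container wrote
}
------------------------------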
• ------------------------------ {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":474,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:59:16.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 13 00:59:16.623: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27dab820-75d1-4af6-8bc7-2fecc8490959" in namespace "downward-api-11" to be "Succeeded or Failed" Nov 13 00:59:16.625: INFO: Pod "downwardapi-volume-27dab820-75d1-4af6-8bc7-2fecc8490959": Phase="Pending", Reason="", readiness=false. Elapsed: 2.439257ms Nov 13 00:59:18.630: INFO: Pod "downwardapi-volume-27dab820-75d1-4af6-8bc7-2fecc8490959": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007783311s Nov 13 00:59:20.634: INFO: Pod "downwardapi-volume-27dab820-75d1-4af6-8bc7-2fecc8490959": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011808904s STEP: Saw pod success Nov 13 00:59:20.635: INFO: Pod "downwardapi-volume-27dab820-75d1-4af6-8bc7-2fecc8490959" satisfied condition "Succeeded or Failed" Nov 13 00:59:20.637: INFO: Trying to get logs from node node2 pod downwardapi-volume-27dab820-75d1-4af6-8bc7-2fecc8490959 container client-container: STEP: delete the pod Nov 13 00:59:20.649: INFO: Waiting for pod downwardapi-volume-27dab820-75d1-4af6-8bc7-2fecc8490959 to disappear Nov 13 00:59:20.651: INFO: Pod downwardapi-volume-27dab820-75d1-4af6-8bc7-2fecc8490959 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:59:20.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-11" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":501,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:59:20.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 Nov 13 00:59:20.709: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the sample API server. Nov 13 00:59:21.111: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Nov 13 00:59:23.142: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 00:59:25.146: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 00:59:27.145: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 00:59:29.145: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 00:59:31.147: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 00:59:33.146: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361961, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 00:59:35.967: 
INFO: Waited 815.71186ms for the sample-apiserver to be ready to handle requests. STEP: Read Status for v1alpha1.wardle.example.com STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' STEP: List APIServices Nov 13 00:59:36.369: INFO: Found v1alpha1.wardle.example.com in APIServiceList [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:59:37.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-6425" for this suite. • [SLOW TEST:16.577 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":21,"skipped":510,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:59:37.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 00:59:37.823: INFO: Checking APIGroup: apiregistration.k8s.io Nov 13 00:59:37.825: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Nov 13 00:59:37.825: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Nov 13 00:59:37.825: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Nov 13 00:59:37.825: INFO: Checking APIGroup: apps Nov 13 00:59:37.825: INFO: PreferredVersion.GroupVersion: apps/v1 Nov 13 00:59:37.825: INFO: Versions found [{apps/v1 v1}] Nov 13 00:59:37.825: INFO: apps/v1 matches apps/v1 Nov 13 00:59:37.825: INFO: Checking APIGroup: events.k8s.io Nov 13 00:59:37.826: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Nov 13 00:59:37.826: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Nov 13 00:59:37.826: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Nov 13 00:59:37.826: INFO: Checking APIGroup: authentication.k8s.io Nov 13 00:59:37.827: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Nov 13 00:59:37.827: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Nov 13 00:59:37.827: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Nov 13 00:59:37.827: INFO: Checking APIGroup: authorization.k8s.io Nov 13 00:59:37.828: INFO: PreferredVersion.GroupVersion: 
authorization.k8s.io/v1 Nov 13 00:59:37.828: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Nov 13 00:59:37.828: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Nov 13 00:59:37.828: INFO: Checking APIGroup: autoscaling Nov 13 00:59:37.829: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Nov 13 00:59:37.829: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Nov 13 00:59:37.829: INFO: autoscaling/v1 matches autoscaling/v1 Nov 13 00:59:37.829: INFO: Checking APIGroup: batch Nov 13 00:59:37.830: INFO: PreferredVersion.GroupVersion: batch/v1 Nov 13 00:59:37.830: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Nov 13 00:59:37.830: INFO: batch/v1 matches batch/v1 Nov 13 00:59:37.830: INFO: Checking APIGroup: certificates.k8s.io Nov 13 00:59:37.831: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Nov 13 00:59:37.831: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Nov 13 00:59:37.831: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Nov 13 00:59:37.831: INFO: Checking APIGroup: networking.k8s.io Nov 13 00:59:37.832: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Nov 13 00:59:37.832: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Nov 13 00:59:37.832: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Nov 13 00:59:37.832: INFO: Checking APIGroup: extensions Nov 13 00:59:37.833: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Nov 13 00:59:37.833: INFO: Versions found [{extensions/v1beta1 v1beta1}] Nov 13 00:59:37.833: INFO: extensions/v1beta1 matches extensions/v1beta1 Nov 13 00:59:37.833: INFO: Checking APIGroup: policy Nov 13 00:59:37.833: INFO: PreferredVersion.GroupVersion: policy/v1 Nov 13 00:59:37.833: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] Nov 13 00:59:37.833: INFO: policy/v1 matches policy/v1 Nov 13 00:59:37.833: INFO: Checking APIGroup: rbac.authorization.k8s.io Nov 13 00:59:37.834: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Nov 13 00:59:37.834: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Nov 13 00:59:37.834: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Nov 13 00:59:37.834: INFO: Checking APIGroup: storage.k8s.io Nov 13 00:59:37.835: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Nov 13 00:59:37.835: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Nov 13 00:59:37.835: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Nov 13 00:59:37.835: INFO: Checking APIGroup: admissionregistration.k8s.io Nov 13 00:59:37.836: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Nov 13 00:59:37.836: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Nov 13 00:59:37.836: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Nov 13 00:59:37.836: INFO: Checking APIGroup: apiextensions.k8s.io Nov 13 00:59:37.837: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Nov 13 00:59:37.837: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Nov 13 00:59:37.837: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Nov 13 00:59:37.837: INFO: Checking APIGroup: scheduling.k8s.io Nov 13 00:59:37.838: INFO: PreferredVersion.GroupVersion: 
scheduling.k8s.io/v1 Nov 13 00:59:37.838: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Nov 13 00:59:37.838: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Nov 13 00:59:37.838: INFO: Checking APIGroup: coordination.k8s.io Nov 13 00:59:37.839: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Nov 13 00:59:37.839: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Nov 13 00:59:37.839: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Nov 13 00:59:37.839: INFO: Checking APIGroup: node.k8s.io Nov 13 00:59:37.839: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 Nov 13 00:59:37.839: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] Nov 13 00:59:37.839: INFO: node.k8s.io/v1 matches node.k8s.io/v1 Nov 13 00:59:37.839: INFO: Checking APIGroup: discovery.k8s.io Nov 13 00:59:37.840: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 Nov 13 00:59:37.840: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] Nov 13 00:59:37.840: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 Nov 13 00:59:37.840: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io Nov 13 00:59:37.843: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 Nov 13 00:59:37.843: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] Nov 13 00:59:37.843: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 Nov 13 00:59:37.843: INFO: Checking APIGroup: intel.com Nov 13 00:59:37.844: INFO: PreferredVersion.GroupVersion: intel.com/v1 Nov 13 00:59:37.844: INFO: Versions found [{intel.com/v1 v1}] Nov 13 00:59:37.844: INFO: intel.com/v1 matches intel.com/v1 Nov 13 00:59:37.844: INFO: Checking APIGroup: k8s.cni.cncf.io Nov 13 00:59:37.845: INFO: PreferredVersion.GroupVersion: k8s.cni.cncf.io/v1 Nov 13 00:59:37.845: INFO: Versions found [{k8s.cni.cncf.io/v1 v1}] Nov 13 00:59:37.845: INFO: k8s.cni.cncf.io/v1 matches k8s.cni.cncf.io/v1 Nov 13 00:59:37.845: INFO: Checking APIGroup: monitoring.coreos.com Nov 13 00:59:37.846: INFO: PreferredVersion.GroupVersion: monitoring.coreos.com/v1 Nov 13 00:59:37.846: INFO: Versions found [{monitoring.coreos.com/v1 v1} {monitoring.coreos.com/v1alpha1 v1alpha1}] Nov 13 00:59:37.846: INFO: monitoring.coreos.com/v1 matches monitoring.coreos.com/v1 Nov 13 00:59:37.846: INFO: Checking APIGroup: telemetry.intel.com Nov 13 00:59:37.847: INFO: PreferredVersion.GroupVersion: telemetry.intel.com/v1alpha1 Nov 13 00:59:37.847: INFO: Versions found [{telemetry.intel.com/v1alpha1 v1alpha1}] Nov 13 00:59:37.847: INFO: telemetry.intel.com/v1alpha1 matches telemetry.intel.com/v1alpha1 Nov 13 00:59:37.847: INFO: Checking APIGroup: custom.metrics.k8s.io Nov 13 00:59:37.848: INFO: PreferredVersion.GroupVersion: custom.metrics.k8s.io/v1beta1 Nov 13 00:59:37.848: INFO: Versions found [{custom.metrics.k8s.io/v1beta1 v1beta1}] Nov 13 00:59:37.848: INFO: custom.metrics.k8s.io/v1beta1 matches custom.metrics.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:59:37.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-5092" for this suite. 
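------------------------------
The PreferredVersion validation above is a straight discovery walk: for every API group, the preferred GroupVersion must appear among the group's served versions. A sketch with client-go's discovery client, reusing the kubeconfig path from the log:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		// The invariant the spec checks for each group.
		found := false
		for _, v := range g.Versions {
			if v.GroupVersion == g.PreferredVersion.GroupVersion {
				found = true
				break
			}
		}
		fmt.Printf("%-35s preferred=%-40s ok=%v\n",
			g.Name, g.PreferredVersion.GroupVersion, found)
	}
}
------------------------------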
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":22,"skipped":519,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:57:07.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-6756 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet Nov 13 00:57:07.947: INFO: Found 0 stateful pods, waiting for 3 Nov 13 00:57:17.959: INFO: Found 2 stateful pods, waiting for 3 Nov 13 00:57:27.952: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Nov 13 00:57:27.952: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Nov 13 00:57:27.952: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Nov 13 00:57:28.011: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Nov 13 00:57:38.040: INFO: Updating stateful set ss2 Nov 13 00:57:38.044: INFO: Waiting for Pod statefulset-6756/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Nov 13 00:57:48.062: INFO: Waiting for Pod statefulset-6756/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 STEP: Restoring Pods to the correct revision when they are deleted Nov 13 00:57:58.110: INFO: Found 2 stateful pods, waiting for 3 Nov 13 00:58:08.140: INFO: Found 2 stateful pods, waiting for 3 Nov 13 00:58:18.114: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Nov 13 00:58:18.114: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Nov 13 00:58:18.114: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Nov 13 00:58:18.136: INFO: Updating stateful set ss2 Nov 13 00:58:18.191: INFO: Waiting for Pod statefulset-6756/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Nov 13 00:58:28.231: INFO: Waiting for Pod statefulset-6756/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Nov 13 00:58:38.217: INFO: Updating stateful set ss2 Nov 13 00:58:38.222: INFO: Waiting for StatefulSet statefulset-6756/ss2 to complete update Nov 13 00:58:38.223: INFO: Waiting for Pod statefulset-6756/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Nov 13 
00:58:48.229: INFO: Waiting for StatefulSet statefulset-6756/ss2 to complete update Nov 13 00:58:48.229: INFO: Waiting for Pod statefulset-6756/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Nov 13 00:58:58.228: INFO: Waiting for StatefulSet statefulset-6756/ss2 to complete update [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Nov 13 00:59:08.233: INFO: Deleting all statefulset in ns statefulset-6756 Nov 13 00:59:08.235: INFO: Scaling statefulset ss2 to 0 Nov 13 00:59:38.250: INFO: Waiting for statefulset status.replicas updated to 0 Nov 13 00:59:38.252: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:59:38.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6756" for this suite. • [SLOW TEST:150.374 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":20,"skipped":375,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:59:11.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Nov 13 00:59:11.553: INFO: >>> kubeConfig: /root/.kube/config Nov 13 00:59:20.139: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:59:38.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-353" for this suite. 
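------------------------------
What the CRD-publish-openapi spec above checks is that a served CRD's schema shows up in the aggregated /openapi/v2 document. A hedged sketch of that check; the substring is a placeholder for whatever CRD name the test registered:

package main

import (
	"context"
	"fmt"
	"strings"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Fetch the aggregated OpenAPI v2 document from the apiserver.
	body, err := cs.Discovery().RESTClient().Get().
		AbsPath("/openapi/v2").Do(context.Background()).Raw()
	if err != nil {
		panic(err)
	}
	// Placeholder substring; use the registered CRD's kind or group.
	if strings.Contains(string(body), "crd-publish-openapi-test") {
		fmt.Println("CRD definitions found in /openapi/v2")
	} else {
		fmt.Println("CRD definitions not (yet) published")
	}
}
------------------------------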
• [SLOW TEST:27.452 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":12,"skipped":278,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:56:36.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-8405 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet Nov 13 00:56:36.695: INFO: Found 0 stateful pods, waiting for 3 Nov 13 00:56:46.698: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Nov 13 00:56:46.698: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Nov 13 00:56:46.698: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Nov 13 00:56:46.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8405 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 13 00:56:47.093: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 13 00:56:47.093: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 13 00:56:47.093: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Nov 13 00:56:57.123: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Nov 13 00:57:07.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8405 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 13 00:57:07.410: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Nov 13 00:57:07.410: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 13 00:57:07.410: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 13 00:57:17.427: INFO: Waiting for StatefulSet 
statefulset-8405/ss2 to complete update Nov 13 00:57:17.427: INFO: Waiting for Pod statefulset-8405/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Nov 13 00:57:17.427: INFO: Waiting for Pod statefulset-8405/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Nov 13 00:57:27.435: INFO: Waiting for StatefulSet statefulset-8405/ss2 to complete update Nov 13 00:57:27.435: INFO: Waiting for Pod statefulset-8405/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Nov 13 00:57:27.435: INFO: Waiting for Pod statefulset-8405/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Nov 13 00:57:37.433: INFO: Waiting for StatefulSet statefulset-8405/ss2 to complete update Nov 13 00:57:37.433: INFO: Waiting for Pod statefulset-8405/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Nov 13 00:57:47.434: INFO: Waiting for StatefulSet statefulset-8405/ss2 to complete update Nov 13 00:57:47.434: INFO: Waiting for Pod statefulset-8405/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 STEP: Rolling back to a previous revision Nov 13 00:57:57.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8405 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 13 00:57:57.828: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 13 00:57:57.828: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 13 00:57:57.828: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 13 00:58:07.859: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Nov 13 00:58:17.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8405 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 13 00:58:19.004: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Nov 13 00:58:19.004: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 13 00:58:19.004: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 13 00:58:29.023: INFO: Waiting for StatefulSet statefulset-8405/ss2 to complete update Nov 13 00:58:29.023: INFO: Waiting for Pod statefulset-8405/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Nov 13 00:58:29.023: INFO: Waiting for Pod statefulset-8405/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Nov 13 00:58:29.023: INFO: Waiting for Pod statefulset-8405/ss2-2 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Nov 13 00:58:39.029: INFO: Waiting for StatefulSet statefulset-8405/ss2 to complete update Nov 13 00:58:39.029: INFO: Waiting for Pod statefulset-8405/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Nov 13 00:58:39.029: INFO: Waiting for Pod statefulset-8405/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Nov 13 00:58:49.030: INFO: Waiting for StatefulSet statefulset-8405/ss2 to complete update Nov 13 00:58:49.030: INFO: Waiting for Pod statefulset-8405/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Nov 13 00:58:59.032: INFO: Deleting all statefulset in ns statefulset-8405 Nov 13 00:58:59.035: INFO: Scaling statefulset ss2 to 0 Nov 13 00:59:39.052: INFO: Waiting for statefulset status.replicas updated to 0 Nov 13 00:59:39.054: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:59:39.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8405" for this suite. • [SLOW TEST:182.413 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":10,"skipped":200,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:58:50.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-799216a6-9dd7-412d-8a0f-18f41e7441f7 in namespace container-probe-2485 Nov 13 00:58:54.701: INFO: Started pod busybox-799216a6-9dd7-412d-8a0f-18f41e7441f7 in namespace container-probe-2485 STEP: checking the pod's current state and verifying that restartCount is present Nov 13 00:58:54.703: INFO: Initial restart count of pod busybox-799216a6-9dd7-412d-8a0f-18f41e7441f7 is 0 Nov 13 00:59:42.805: INFO: Restart count of pod container-probe-2485/busybox-799216a6-9dd7-412d-8a0f-18f41e7441f7 is now 1 (48.101579975s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:59:42.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2485" for this suite. 
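------------------------------
The exec liveness probe above starts failing once the container deletes /tmp/health, at which point the kubelet restarts it (the single restart logged after 48s). A sketch of such a pod; the image, timings, and self-sabotaging command are illustrative, and the probe field is named ProbeHandler in k8s.io/api v0.23+ (Handler before that):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-exec-liveness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox:1.29", // assumed tag
				// Healthy for 10s, then "cat /tmp/health" starts failing
				// and the kubelet restarts the container.
				Command: []string{"/bin/sh", "-c",
					"echo ok > /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------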
• [SLOW TEST:52.159 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":188,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:59:38.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Nov 13 00:59:39.013: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:59:46.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4237" for this suite. 
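------------------------------
Init containers run one at a time and must each exit successfully before the app containers start; with restartPolicy Always the pod then keeps running, which is the behavior the spec above invokes. A minimal sketch with illustrative images and commands:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Run sequentially, each must exit 0 before the next starts.
			InitContainers: []corev1.Container{
				{Name: "init-1", Image: "busybox:1.29", Command: []string{"true"}},
				{Name: "init-2", Image: "busybox:1.29", Command: []string{"true"}},
			},
			// Starts only after both init containers have succeeded.
			Containers: []corev1.Container{
				{Name: "run-1", Image: "busybox:1.29", Command: []string{"sleep", "3600"}},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------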
• [SLOW TEST:7.694 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":13,"skipped":280,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:59:39.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Nov 13 00:59:39.144: INFO: The status of Pod labelsupdate6964b44a-cf59-43b4-a31c-b74d75ea04ed is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:59:41.147: INFO: The status of Pod labelsupdate6964b44a-cf59-43b4-a31c-b74d75ea04ed is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:59:43.148: INFO: The status of Pod labelsupdate6964b44a-cf59-43b4-a31c-b74d75ea04ed is Pending, waiting for it to be Running (with Ready = true) Nov 13 00:59:45.148: INFO: The status of Pod labelsupdate6964b44a-cf59-43b4-a31c-b74d75ea04ed is Running (Ready = true) Nov 13 00:59:45.667: INFO: Successfully updated pod "labelsupdate6964b44a-cf59-43b4-a31c-b74d75ea04ed" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:59:47.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6196" for this suite. 
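------------------------------
The projected downwardAPI file in the spec above tracks metadata.labels, so when the test patches the pod's labels the kubelet rewrites the mounted file; that is the update the spec waits for. A sketch of the volume wiring; image and command are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-demo",
			Labels: map[string]string{"check": "v1"}, // patching this updates the file
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "busybox:1.29", // illustrative image
				Command: []string{"sh", "-c",
					"while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "labels",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------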
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:59:39.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating the pod
Nov 13 00:59:39.144: INFO: The status of Pod labelsupdate6964b44a-cf59-43b4-a31c-b74d75ea04ed is Pending, waiting for it to be Running (with Ready = true)
Nov 13 00:59:41.147: INFO: The status of Pod labelsupdate6964b44a-cf59-43b4-a31c-b74d75ea04ed is Pending, waiting for it to be Running (with Ready = true)
Nov 13 00:59:43.148: INFO: The status of Pod labelsupdate6964b44a-cf59-43b4-a31c-b74d75ea04ed is Pending, waiting for it to be Running (with Ready = true)
Nov 13 00:59:45.148: INFO: The status of Pod labelsupdate6964b44a-cf59-43b4-a31c-b74d75ea04ed is Running (Ready = true)
Nov 13 00:59:45.667: INFO: Successfully updated pod "labelsupdate6964b44a-cf59-43b4-a31c-b74d75ea04ed"
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:59:47.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6196" for this suite.

• [SLOW TEST:8.581 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":214,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
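The spec above exposes the pod's labels as a file through a projected downward-API volume, updates the labels, and waits for the kubelet to rewrite the file. A sketch of such a volume; the volume name is an illustrative assumption:

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
)

// labelsVolume projects metadata.labels into a file named "labels"; the
// kubelet refreshes the file when the pod's labels change, which is what
// this spec waits for after the update.
func labelsVolume() v1.Volume {
	return v1.Volume{
		Name: "podinfo", // illustrative name
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					DownwardAPI: &v1.DownwardAPIProjection{
						Items: []v1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				}},
			},
		},
	}
}
------------------------------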
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:59:42.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
Nov 13 00:59:43.364: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 13 00:59:43.375: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 13 00:59:45.386: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361983, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361983, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361983, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361983, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 13 00:59:47.389: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361983, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361983, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361983, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361983, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 13 00:59:50.398: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:59:51.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5830" for this suite.
STEP: Destroying namespace "webhook-5830-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.558 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":17,"skipped":233,"failed":0}
SS
------------------------------
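The spec above registers a mutating webhook for pod CREATEs and then creates a pod that the webhook must patch with defaults. A sketch of roughly what that registration looks like with the admissionregistration/v1 types; the configuration name, webhook name, service path, and CA bundle handling are illustrative assumptions:

package e2esketch

import (
	admv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// mutatingPodWebhook routes pod CREATE requests to the in-cluster webhook
// service, which returns a patch that the API server applies before admission.
func mutatingPodWebhook(ns string, caBundle []byte) *admv1.MutatingWebhookConfiguration {
	path := "/mutating-pods" // illustrative path on the webhook server
	sideEffects := admv1.SideEffectClassNone
	return &admv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-mutating-webhook"},
		Webhooks: []admv1.MutatingWebhook{{
			Name: "mutate-pods.example.com",
			ClientConfig: admv1.WebhookClientConfig{
				Service:  &admv1.ServiceReference{Namespace: ns, Name: "e2e-test-webhook", Path: &path},
				CABundle: caBundle, // CA that signed the webhook's serving cert
			},
			Rules: []admv1.RuleWithOperations{{
				Operations: []admv1.OperationType{admv1.Create},
				Rule:       admv1.Rule{APIGroups: []string{""}, APIVersions: []string{"v1"}, Resources: []string{"pods"}},
			}},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
}
------------------------------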
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:58:50.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1113 00:58:51.180482      38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 00:59:53.199: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 00:59:53.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9750" for this suite.

• [SLOW TEST:63.089 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":26,"skipped":304,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
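"Not orphaning" in the spec above means the Deployment is deleted with a non-Orphan propagation policy, so the garbage collector also removes the dependent ReplicaSet and its pods. A sketch with client-go, using Background propagation as one illustrative choice (Foreground would also avoid orphaning):

package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteWithoutOrphaning deletes a Deployment so that its dependents are
// garbage collected rather than orphaned.
func deleteWithoutOrphaning(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground // any non-Orphan policy works here
	return c.AppsV1().Deployments(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}
------------------------------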
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 00:57:21.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service in namespace services-7899
STEP: creating service affinity-nodeport in namespace services-7899
STEP: creating replication controller affinity-nodeport in namespace services-7899
I1113 00:57:21.546938      29 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-7899, replica count: 3
I1113 00:57:24.597347      29 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1113 00:57:27.597613      29 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1113 00:57:30.597894      29 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Nov 13 00:57:30.606: INFO: Creating new exec pod
Nov 13 00:57:37.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Nov 13 00:57:37.931: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n"
Nov 13 00:57:37.931: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Nov 13 00:57:37.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.22.3 80'
Nov 13 00:57:38.284: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.22.3 80\nConnection to 10.233.22.3 80 port [tcp/http] succeeded!\n"
Nov 13 00:57:38.284: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Nov 13 00:57:38.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177'
Nov 13 00:57:38.674: INFO: rc: 1
Nov 13 00:57:38.674: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30177
nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...

[... the same nc check against 10.10.190.207:30177 is retried about once per second from 00:57:39 through 00:58:54, and every attempt fails identically: "nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused" ...]

Nov 13 00:58:54.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177'
Nov 13 00:58:55.329: INFO: rc: 1
Nov 13 00:58:55.329: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30177
nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 00:58:55.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:58:55.959: INFO: rc: 1 Nov 13 00:58:55.959: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:58:56.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:58:57.950: INFO: rc: 1 Nov 13 00:58:57.950: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:58:58.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:58:58.920: INFO: rc: 1 Nov 13 00:58:58.920: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:58:59.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:58:59.961: INFO: rc: 1 Nov 13 00:58:59.961: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:59:00.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:00.933: INFO: rc: 1 Nov 13 00:59:00.934: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:01.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:01.963: INFO: rc: 1 Nov 13 00:59:01.963: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:02.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:02.986: INFO: rc: 1 Nov 13 00:59:02.986: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:03.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:04.340: INFO: rc: 1 Nov 13 00:59:04.340: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:59:04.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:04.962: INFO: rc: 1 Nov 13 00:59:04.962: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:05.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:05.931: INFO: rc: 1 Nov 13 00:59:05.931: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:06.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:06.912: INFO: rc: 1 Nov 13 00:59:06.912: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:07.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:07.942: INFO: rc: 1 Nov 13 00:59:07.942: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:59:08.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:08.921: INFO: rc: 1 Nov 13 00:59:08.921: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30177 + echo hostName nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:09.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:09.909: INFO: rc: 1 Nov 13 00:59:09.909: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:10.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:11.033: INFO: rc: 1 Nov 13 00:59:11.033: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:11.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:12.009: INFO: rc: 1 Nov 13 00:59:12.009: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:59:12.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:12.969: INFO: rc: 1 Nov 13 00:59:12.969: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:13.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:13.950: INFO: rc: 1 Nov 13 00:59:13.950: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:14.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:14.937: INFO: rc: 1 Nov 13 00:59:14.938: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:15.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:15.932: INFO: rc: 1 Nov 13 00:59:15.932: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:59:16.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:16.923: INFO: rc: 1 Nov 13 00:59:16.923: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:17.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:18.196: INFO: rc: 1 Nov 13 00:59:18.196: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:18.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:18.986: INFO: rc: 1 Nov 13 00:59:18.986: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:19.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:19.926: INFO: rc: 1 Nov 13 00:59:19.926: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:59:20.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:20.901: INFO: rc: 1 Nov 13 00:59:20.901: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:21.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:22.109: INFO: rc: 1 Nov 13 00:59:22.109: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30177 + echo hostName nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:22.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:22.924: INFO: rc: 1 Nov 13 00:59:22.924: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:23.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:23.920: INFO: rc: 1 Nov 13 00:59:23.920: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:59:24.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:24.925: INFO: rc: 1 Nov 13 00:59:24.925: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:25.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:25.913: INFO: rc: 1 Nov 13 00:59:25.913: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:26.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:26.913: INFO: rc: 1 Nov 13 00:59:26.913: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:27.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:28.285: INFO: rc: 1 Nov 13 00:59:28.285: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:59:28.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:28.924: INFO: rc: 1 Nov 13 00:59:28.924: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:29.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:29.908: INFO: rc: 1 Nov 13 00:59:29.908: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:30.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:30.936: INFO: rc: 1 Nov 13 00:59:30.936: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:31.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:31.908: INFO: rc: 1 Nov 13 00:59:31.909: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:59:32.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:32.910: INFO: rc: 1 Nov 13 00:59:32.910: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:33.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:33.919: INFO: rc: 1 Nov 13 00:59:33.919: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:34.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:34.911: INFO: rc: 1 Nov 13 00:59:34.912: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:35.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:35.940: INFO: rc: 1 Nov 13 00:59:35.940: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
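For reference, the probe above is a shell one-liner the test framework execs inside the client pod on every attempt. A minimal manual reproduction, assuming kubectl access to the same cluster (namespace, pod name, node IP 10.10.190.207, and NodePort 30177 are copied from this run; the one-second cadence and 2m0s deadline approximate the framework's retry loop rather than quoting its code):

  #!/bin/sh
  # Hypothetical re-run of the reachability probe seen in this log.
  NS=services-7899
  POD=execpod-affinitytf9zk
  EP_IP=10.10.190.207
  EP_PORT=30177
  deadline=$(( $(date +%s) + 120 ))           # 2m0s, the timeout the test reports
  while [ "$(date +%s)" -lt "$deadline" ]; do
    # Same command the log shows: pipe a payload into nc with a 2s connect timeout.
    if kubectl --namespace="$NS" exec "$POD" -- /bin/sh -x -c \
        "echo hostName | nc -v -t -w 2 $EP_IP $EP_PORT"; then
      echo "endpoint reachable"
      exit 0
    fi
    sleep 1                                   # the log shows roughly one attempt per second
  done
  echo "service is not reachable within 2m0s timeout on endpoint $EP_IP:$EP_PORT" >&2
  exit 1

A persistent 'Connection refused' on a NodePort, as seen here, typically means nothing on the node accepted the connection on that port (no kube-proxy rule matched), which is why the test keeps retrying for the full two minutes rather than failing on the first refusal.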
Nov 13 00:59:36.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:36.901: INFO: rc: 1 Nov 13 00:59:36.901: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:37.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:37.924: INFO: rc: 1 Nov 13 00:59:37.924: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:38.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:38.938: INFO: rc: 1 Nov 13 00:59:38.938: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:38.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177' Nov 13 00:59:39.181: INFO: rc: 1 Nov 13 00:59:39.182: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod-affinitytf9zk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30177: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30177 nc: connect to 10.10.190.207 port 30177 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:39.182: FAIL: Unexpected error: <*errors.errorString | 0xc003c48970>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30177 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30177 over TCP protocol occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc0011eedc0, 0x779f8f8, 0xc0033aa9a0, 0xc001604280, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 +0x625 k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2531 k8s.io/kubernetes/test/e2e/network.glob..func24.25() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1829 +0xa5 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0017e2f00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0017e2f00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0017e2f00, 0x70e7b58) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 Nov 13 00:59:39.184: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-7899, will wait for the garbage collector to delete the pods Nov 13 00:59:39.260: INFO: Deleting ReplicationController affinity-nodeport took: 4.701473ms Nov 13 00:59:39.361: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.727109ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "services-7899". STEP: Found 27 events. Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:21 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-dnwfb Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:21 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-98pq9 Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:21 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-8ms79 Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:21 +0000 UTC - event for affinity-nodeport-8ms79: {default-scheduler } Scheduled: Successfully assigned services-7899/affinity-nodeport-8ms79 to node1 Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:21 +0000 UTC - event for affinity-nodeport-98pq9: {default-scheduler } Scheduled: Successfully assigned services-7899/affinity-nodeport-98pq9 to node1 Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:21 +0000 UTC - event for affinity-nodeport-dnwfb: {default-scheduler } Scheduled: Successfully assigned services-7899/affinity-nodeport-dnwfb to node1 Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:24 +0000 UTC - event for affinity-nodeport-8ms79: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:24 +0000 UTC - event for affinity-nodeport-8ms79: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 343.664987ms Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:24 +0000 UTC - event for affinity-nodeport-98pq9: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:24 +0000 UTC - event for affinity-nodeport-dnwfb: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:25 +0000 UTC - event for affinity-nodeport-8ms79: {kubelet node1} Created: Created container affinity-nodeport Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:25 +0000 UTC - event for affinity-nodeport-98pq9: {kubelet node1} Created: Created container affinity-nodeport Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:25 +0000 UTC - event for 
affinity-nodeport-98pq9: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 608.848186ms Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:25 +0000 UTC - event for affinity-nodeport-dnwfb: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 914.368517ms Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:26 +0000 UTC - event for affinity-nodeport-8ms79: {kubelet node1} Started: Started container affinity-nodeport Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:26 +0000 UTC - event for affinity-nodeport-98pq9: {kubelet node1} Started: Started container affinity-nodeport Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:26 +0000 UTC - event for affinity-nodeport-dnwfb: {kubelet node1} Created: Created container affinity-nodeport Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:26 +0000 UTC - event for affinity-nodeport-dnwfb: {kubelet node1} Started: Started container affinity-nodeport Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:30 +0000 UTC - event for execpod-affinitytf9zk: {default-scheduler } Scheduled: Successfully assigned services-7899/execpod-affinitytf9zk to node2 Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:33 +0000 UTC - event for execpod-affinitytf9zk: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 347.545049ms Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:33 +0000 UTC - event for execpod-affinitytf9zk: {kubelet node2} Created: Created container agnhost-container Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:33 +0000 UTC - event for execpod-affinitytf9zk: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Nov 13 00:59:51.477: INFO: At 2021-11-13 00:57:34 +0000 UTC - event for execpod-affinitytf9zk: {kubelet node2} Started: Started container agnhost-container Nov 13 00:59:51.477: INFO: At 2021-11-13 00:59:39 +0000 UTC - event for affinity-nodeport-8ms79: {kubelet node1} Killing: Stopping container affinity-nodeport Nov 13 00:59:51.477: INFO: At 2021-11-13 00:59:39 +0000 UTC - event for affinity-nodeport-98pq9: {kubelet node1} Killing: Stopping container affinity-nodeport Nov 13 00:59:51.477: INFO: At 2021-11-13 00:59:39 +0000 UTC - event for affinity-nodeport-dnwfb: {kubelet node1} Killing: Stopping container affinity-nodeport Nov 13 00:59:51.477: INFO: At 2021-11-13 00:59:39 +0000 UTC - event for execpod-affinitytf9zk: {kubelet node2} Killing: Stopping container agnhost-container Nov 13 00:59:51.479: INFO: POD NODE PHASE GRACE CONDITIONS Nov 13 00:59:51.479: INFO: Nov 13 00:59:51.483: INFO: Logging node info for node master1 Nov 13 00:59:51.486: INFO: Node Info: &Node{ObjectMeta:{master1 56d66c54-e52b-494a-a758-e4b658c4b245 69368 0 2021-11-12 21:05:50 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:05:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:25 +0000 UTC,LastTransitionTime:2021-11-12 21:11:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 00:59:49 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 00:59:49 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 00:59:49 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 00:59:49 +0000 UTC,LastTransitionTime:2021-11-12 21:11:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94e600d00e79450a9fb632d8473a11eb,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:6e4bb815-8b93-47c2-9321-93e7ada261f6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:57d1a39684ee5a5b5d34638cff843561d440d0f996303b2e841cabf228a4c2af nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 00:59:51.486: INFO: Logging kubelet events for node master1 Nov 13 00:59:51.488: INFO: Logging pods the kubelet thinks is on node master1 Nov 13 00:59:51.515: INFO: node-exporter-zm5hq started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 00:59:51.515: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 00:59:51.515: INFO: Container node-exporter ready: true, restart count 0 Nov 13 00:59:51.515: INFO: kube-scheduler-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:51.515: INFO: Container kube-scheduler ready: true, restart count 0 Nov 13 00:59:51.515: INFO: kube-controller-manager-master1 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:51.515: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 13 00:59:51.515: INFO: kube-flannel-79bvx started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 00:59:51.515: INFO: Init container install-cni ready: true, restart count 0 Nov 13 00:59:51.515: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 00:59:51.515: INFO: kube-multus-ds-amd64-qtmwl started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:51.515: INFO: Container kube-multus ready: true, restart count 1 Nov 13 00:59:51.515: INFO: coredns-8474476ff8-9vc8b started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:51.515: INFO: Container coredns ready: true, restart count 2 Nov 13 00:59:51.515: INFO: kube-apiserver-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:51.515: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 00:59:51.515: INFO: kube-proxy-6m7qt started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:51.515: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 00:59:51.515: INFO: container-registry-65d7c44b96-qwqcz started at 2021-11-12 21:12:56 +0000 UTC (0+2 container statuses recorded) Nov 13 00:59:51.515: INFO: Container docker-registry ready: true, restart count 0 Nov 13 00:59:51.515: INFO: Container nginx ready: true, restart count 0 W1113 00:59:51.536422 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
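The event listing and node dumps collected above (and for the remaining nodes below) are the framework's automatic post-failure diagnostics. A sketch of pulling equivalent data by hand with stock kubectl, using the namespace and node name from this run:

  # Events for the test namespace, oldest first
  kubectl get events -n services-7899 --sort-by=.lastTimestamp
  # Pod placement in the namespace
  kubectl get pods -n services-7899 -o wide
  # Node conditions, capacity, and images (human-readable view / full object)
  kubectl describe node master1
  kubectl get node master1 -o yaml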
Nov 13 00:59:51.617: INFO: Latency metrics for node master1 Nov 13 00:59:51.617: INFO: Logging node info for node master2 Nov 13 00:59:51.619: INFO: Node Info: &Node{ObjectMeta:{master2 9cc6c106-2749-4b3a-bbe2-d8a672ab49e0 69242 0 2021-11-12 21:06:20 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:06:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-11-12 21:16:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-12 21:16:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:30 +0000 UTC,LastTransitionTime:2021-11-12 21:11:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 00:59:46 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 00:59:46 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 00:59:46 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 00:59:46 +0000 UTC,LastTransitionTime:2021-11-12 21:08:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:65d51a0e6dc44ad1ac5d3b5cd37365f1,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:728abaee-0c5e-4ddb-a22e-72a1345c5ab6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 00:59:51.620: INFO: Logging kubelet events for node master2 Nov 13 00:59:51.622: INFO: Logging pods the kubelet thinks is on node master2 Nov 13 00:59:51.637: INFO: kube-flannel-x76f4 started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 00:59:51.637: INFO: Init container install-cni ready: true, restart count 0 Nov 13 00:59:51.637: INFO: Container kube-flannel ready: true, restart count 1 Nov 13 00:59:51.637: INFO: kube-multus-ds-amd64-8zzgk started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:51.637: INFO: Container kube-multus ready: true, restart count 1 Nov 13 00:59:51.637: INFO: coredns-8474476ff8-s7twh started at 2021-11-12 21:09:11 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:51.637: INFO: Container coredns ready: true, restart count 1 Nov 13 00:59:51.637: INFO: node-exporter-clpwc started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 00:59:51.637: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 00:59:51.637: INFO: Container node-exporter ready: true, restart count 0 Nov 13 00:59:51.637: INFO: kube-controller-manager-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:51.637: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 13 00:59:51.637: INFO: kube-scheduler-master2 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:51.637: INFO: Container kube-scheduler ready: true, restart count 2 Nov 13 00:59:51.637: INFO: kube-proxy-5xbt9 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:51.637: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 00:59:51.637: INFO: kube-apiserver-master2 started at 2021-11-12 
21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:51.637: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 00:59:51.637: INFO: node-feature-discovery-controller-cff799f9f-c54h8 started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:51.637: INFO: Container nfd-controller ready: true, restart count 0 W1113 00:59:51.652841 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 00:59:51.722: INFO: Latency metrics for node master2 Nov 13 00:59:51.722: INFO: Logging node info for node master3 Nov 13 00:59:51.725: INFO: Node Info: &Node{ObjectMeta:{master3 fce0cd54-e4d8-4ce1-b720-522aad2d7989 69399 0 2021-11-12 21:06:31 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 00:59:50 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 00:59:50 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 00:59:50 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 00:59:50 +0000 UTC,LastTransitionTime:2021-11-12 21:11:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:592c271b4697499588d9c2b3767b866a,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a95de4ca-c566-4b34-8463-623af932d416,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 00:59:51.726: INFO: Logging kubelet events for node master3 Nov 13 00:59:51.727: INFO: Logging pods the kubelet thinks is on node master3 Nov 13 00:59:51.741: INFO: node-exporter-l4x25 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 00:59:51.741: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 00:59:51.741: INFO: Container node-exporter ready: true, restart count 0 Nov 13 00:59:51.741: INFO: kube-apiserver-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:51.741: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 00:59:51.741: INFO: kube-controller-manager-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:51.741: INFO: Container kube-controller-manager ready: true, restart count 3 Nov 13 00:59:51.741: INFO: kube-scheduler-master3 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:51.741: INFO: Container kube-scheduler ready: true, restart count 2 Nov 13 00:59:51.741: INFO: dns-autoscaler-7df78bfcfb-d88qs started at 2021-11-12 21:09:13 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:51.741: INFO: Container autoscaler ready: true, restart count 1 Nov 13 00:59:51.741: INFO: kube-proxy-tssd5 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:51.741: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 00:59:51.741: INFO: kube-flannel-vxlrs started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 00:59:51.741: INFO: Init container install-cni ready: true, restart count 0 Nov 13 00:59:51.741: INFO: Container kube-flannel ready: true, restart count 1 Nov 13 00:59:51.741: INFO: kube-multus-ds-amd64-vp8p7 started at 
2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:51.741: INFO: Container kube-multus ready: true, restart count 1 W1113 00:59:51.757487 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 00:59:51.827: INFO: Latency metrics for node master3 Nov 13 00:59:51.827: INFO: Logging node info for node node1 Nov 13 00:59:51.830: INFO: Node Info: &Node{ObjectMeta:{node1 6ceb907c-9809-4d18-88c6-b1e10ba80f97 69015 0 2021-11-12 21:07:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-12 21:20:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:27 +0000 UTC,LastTransitionTime:2021-11-12 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 00:59:42 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 00:59:42 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 00:59:42 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 00:59:42 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7cf6287777fe4e3b9a80df40dea25b6d,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:2125bc5f-9167-464a-b6d0-8e8a192327d3,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:1841df8d4cc71e4f69cc1603012b99570f40d18cd36ee1065933b46f984cf0cd alpine:3.12],SizeBytes:5592390,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 00:59:51.831: INFO: Logging kubelet events for node node1 Nov 13 00:59:51.833: INFO: Logging pods the kubelet thinks is on node node1 Nov 13 00:59:53.401: INFO: cmk-init-discover-node1-vkj2s started at 2021-11-12 21:20:18 +0000 UTC (0+3 container statuses recorded) Nov 13 00:59:53.401: INFO: Container discover ready: false, restart count 0 Nov 13 00:59:53.401: INFO: Container init ready: false, restart count 0 Nov 13 00:59:53.401: INFO: Container install ready: false, restart count 0 Nov 13 00:59:53.401: INFO: 
simpletest.rc-wx56l started at (0+0 container statuses recorded) Nov 13 00:59:53.401: INFO: simpletest.rc-vfjb4 started at (0+0 container statuses recorded) Nov 13 00:59:53.401: INFO: node-feature-discovery-worker-zgr4c started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:53.401: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 00:59:53.401: INFO: cmk-webhook-6c9d5f8578-2gp25 started at 2021-11-12 21:21:01 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:53.401: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 00:59:53.401: INFO: simpletest-rc-to-be-deleted-jlpt2 started at 2021-11-13 00:59:46 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:53.401: INFO: Container nginx ready: true, restart count 0 Nov 13 00:59:53.401: INFO: affinity-nodeport-transition-d9vns started at 2021-11-13 00:57:30 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:53.401: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Nov 13 00:59:53.401: INFO: prometheus-k8s-0 started at 2021-11-12 21:22:14 +0000 UTC (0+4 container statuses recorded) Nov 13 00:59:53.402: INFO: Container config-reloader ready: true, restart count 0 Nov 13 00:59:53.402: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 00:59:53.402: INFO: Container grafana ready: true, restart count 0 Nov 13 00:59:53.402: INFO: Container prometheus ready: true, restart count 1 Nov 13 00:59:53.402: INFO: test-recreate-deployment-6cb8b65c46-xg7mv started at (0+0 container statuses recorded) Nov 13 00:59:53.402: INFO: externalname-service-szcjp started at 2021-11-13 00:58:14 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:53.402: INFO: Container externalname-service ready: true, restart count 0 Nov 13 00:59:53.402: INFO: affinity-nodeport-transition-m7np5 started at 2021-11-13 00:57:30 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:53.402: INFO: Container affinity-nodeport-transition ready: false, restart count 0 Nov 13 00:59:53.402: INFO: webhook-to-be-mutated started at (0+0 container statuses recorded) Nov 13 00:59:53.402: INFO: simpletest.rc-bhfm2 started at (0+0 container statuses recorded) Nov 13 00:59:53.402: INFO: nginx-proxy-node1 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:53.402: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 00:59:53.402: INFO: kube-flannel-r7bbp started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 00:59:53.402: INFO: Init container install-cni ready: true, restart count 2 Nov 13 00:59:53.402: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 00:59:53.402: INFO: kube-multus-ds-amd64-4wqsv started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:53.402: INFO: Container kube-multus ready: true, restart count 1 Nov 13 00:59:53.402: INFO: cmk-4tcdw started at 2021-11-12 21:21:00 +0000 UTC (0+2 container statuses recorded) Nov 13 00:59:53.402: INFO: Container nodereport ready: true, restart count 0 Nov 13 00:59:53.402: INFO: Container reconcile ready: true, restart count 0 Nov 13 00:59:53.402: INFO: node-exporter-hqkfs started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 00:59:53.402: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 00:59:53.402: INFO: Container node-exporter ready: true, restart count 0 Nov 13 00:59:53.402: INFO: prometheus-operator-585ccfb458-qcz7s started at 2021-11-12 21:21:55 +0000 UTC 
(0+2 container statuses recorded) Nov 13 00:59:53.402: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 00:59:53.402: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 00:59:53.402: INFO: collectd-74xkn started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded) Nov 13 00:59:53.402: INFO: Container collectd ready: true, restart count 0 Nov 13 00:59:53.402: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 00:59:53.402: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 00:59:53.402: INFO: simpletest.rc-xz246 started at (0+0 container statuses recorded) Nov 13 00:59:53.402: INFO: kube-proxy-p6kbl started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:53.402: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 00:59:53.402: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:53.402: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 00:59:53.402: INFO: simpletest.rc-6zj7b started at (0+0 container statuses recorded) Nov 13 00:59:53.402: INFO: externalname-service-6nfb8 started at 2021-11-13 00:58:14 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:53.402: INFO: Container externalname-service ready: true, restart count 0 W1113 00:59:53.418747 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 00:59:54.176: INFO: Latency metrics for node node1 Nov 13 00:59:54.176: INFO: Logging node info for node node2 Nov 13 00:59:54.179: INFO: Node Info: &Node{ObjectMeta:{node2 652722dd-12b1-4529-ba4d-a00c590e4a68 69243 0 2021-11-12 21:07:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 
feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-12 21:20:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 00:59:46 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 00:59:46 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 00:59:46 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 00:59:46 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fec67f7547064c508c27d44a9bf99ae7,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0a05ac00-ff21-4518-bf68-3564c7a8cf65,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 00:59:54.180: INFO: Logging kubelet events for node node2 Nov 13 00:59:54.182: INFO: Logging pods the kubelet thinks is on node node2 Nov 13 00:59:54.201: INFO: cmk-qhvr7 started at 2021-11-12 21:21:01 +0000 UTC (0+2 container statuses recorded) Nov 13 00:59:54.201: INFO: Container nodereport ready: true, restart count 0 Nov 13 00:59:54.201: INFO: Container reconcile ready: true, restart count 0 Nov 13 00:59:54.201: INFO: simpletest.rc-fv568 started at 2021-11-13 00:59:51 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:54.201: INFO: Container nginx ready: false, restart count 0 Nov 13 00:59:54.201: INFO: execpodwxlzs started at 2021-11-13 00:58:19 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:54.201: INFO: Container agnhost-container ready: true, restart count 0 Nov 13 00:59:54.201: INFO: 
pod-logs-websocket-aa85e2e6-dbb8-4b36-9629-3aa41fb6f536 started at 2021-11-13 00:59:12 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:54.201: INFO: Container main ready: true, restart count 0 Nov 13 00:59:54.201: INFO: simpletest-rc-to-be-deleted-c99mp started at 2021-11-13 00:59:46 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:54.201: INFO: Container nginx ready: false, restart count 0 Nov 13 00:59:54.201: INFO: simpletest-rc-to-be-deleted-6fjqv started at 2021-11-13 00:59:46 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:54.201: INFO: Container nginx ready: false, restart count 0 Nov 13 00:59:54.201: INFO: fail-once-local-kmzl5 started at 2021-11-13 00:59:47 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:54.201: INFO: Container c ready: false, restart count 0 Nov 13 00:59:54.201: INFO: simpletest.rc-q8w5r started at (0+0 container statuses recorded) Nov 13 00:59:54.201: INFO: cmk-init-discover-node2-5f4hp started at 2021-11-12 21:20:38 +0000 UTC (0+3 container statuses recorded) Nov 13 00:59:54.201: INFO: Container discover ready: false, restart count 0 Nov 13 00:59:54.201: INFO: Container init ready: false, restart count 0 Nov 13 00:59:54.201: INFO: Container install ready: false, restart count 0 Nov 13 00:59:54.201: INFO: labelsupdate6964b44a-cf59-43b4-a31c-b74d75ea04ed started at 2021-11-13 00:59:39 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:54.201: INFO: Container client-container ready: true, restart count 0 Nov 13 00:59:54.201: INFO: forbid-27279415-nj4vx started at 2021-11-13 00:55:00 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:54.201: INFO: Container c ready: true, restart count 0 Nov 13 00:59:54.201: INFO: affinity-nodeport-transition-sf4hv started at 2021-11-13 00:57:30 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:54.201: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Nov 13 00:59:54.201: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:54.201: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 00:59:54.202: INFO: node-exporter-hstd9 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 00:59:54.202: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 00:59:54.202: INFO: Container node-exporter ready: true, restart count 0 Nov 13 00:59:54.202: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 started at 2021-11-12 21:25:09 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:54.202: INFO: Container tas-extender ready: true, restart count 0 Nov 13 00:59:54.202: INFO: pod-init-a1bcc87a-098f-40ea-ba38-3b5f50c8b389 started at 2021-11-13 00:59:39 +0000 UTC (2+1 container statuses recorded) Nov 13 00:59:54.202: INFO: Init container init1 ready: true, restart count 0 Nov 13 00:59:54.202: INFO: Init container init2 ready: true, restart count 0 Nov 13 00:59:54.202: INFO: Container run1 ready: true, restart count 0 Nov 13 00:59:54.202: INFO: simpletest-rc-to-be-deleted-86xml started at 2021-11-13 00:59:46 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:54.202: INFO: Container nginx ready: false, restart count 0 Nov 13 00:59:54.202: INFO: nginx-proxy-node2 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:54.202: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 00:59:54.202: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh started at 2021-11-12 
21:17:59 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:54.202: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 00:59:54.202: INFO: simpletest.rc-4j92t started at (0+0 container statuses recorded) Nov 13 00:59:54.202: INFO: simpletest.rc-bz29t started at (0+0 container statuses recorded) Nov 13 00:59:54.202: INFO: simpletest-rc-to-be-deleted-45xcv started at 2021-11-13 00:59:46 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:54.202: INFO: Container nginx ready: false, restart count 0 Nov 13 00:59:54.202: INFO: kube-proxy-pzhf2 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:54.202: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 00:59:54.202: INFO: kube-flannel-mg66r started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 00:59:54.202: INFO: Init container install-cni ready: true, restart count 2 Nov 13 00:59:54.202: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 00:59:54.202: INFO: kube-multus-ds-amd64-2wqj5 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:54.202: INFO: Container kube-multus ready: true, restart count 1 Nov 13 00:59:54.202: INFO: kubernetes-dashboard-785dcbb76d-w2mls started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:54.202: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 00:59:54.202: INFO: sample-webhook-deployment-78988fc6cd-hwt9l started at 2021-11-13 00:59:38 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:54.202: INFO: Container sample-webhook ready: true, restart count 0 Nov 13 00:59:54.202: INFO: node-feature-discovery-worker-mm7xs started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:54.202: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 00:59:54.202: INFO: collectd-mp2z6 started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded) Nov 13 00:59:54.202: INFO: Container collectd ready: true, restart count 0 Nov 13 00:59:54.202: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 00:59:54.202: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 00:59:54.202: INFO: fail-once-local-bd4gs started at 2021-11-13 00:59:47 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:54.202: INFO: Container c ready: false, restart count 0 Nov 13 00:59:54.202: INFO: simpletest.rc-fn44r started at 2021-11-13 00:59:51 +0000 UTC (0+1 container statuses recorded) Nov 13 00:59:54.202: INFO: Container nginx ready: false, restart count 0 W1113 00:59:54.214515 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 00:59:56.029: INFO: Latency metrics for node node2 Nov 13 00:59:56.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7899" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [154.525 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 00:59:39.182: Unexpected error: <*errors.errorString | 0xc003c48970>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30177 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30177 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":11,"skipped":127,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:59:56.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:59:56.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1149" for this suite. 
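Returning to the [sig-network] Services failure above: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30177 over TCP protocol" means that none of the suite's periodic probes of the node IP and allocated NodePort ever connected. The framework's probe is the same nc one-liner that appears verbatim later in this log; as a hedged sketch, reproducing it by hand against this endpoint (the client pod name "execpod" here is hypothetical) would look like:

    kubectl --kubeconfig=/root/.kube/config --namespace=services-7899 exec execpod -- \
      /bin/sh -x -c 'echo hostName | nc -v -t -w 2 10.10.190.207 30177'

A "Connection refused" from this probe, as opposed to a hang, means the node actively rejected the connection: kube-proxy had not programmed a rule for port 30177, or the service had no ready endpoints behind it.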
• ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:59:38.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 13 00:59:38.874: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 13 00:59:40.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361978, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361978, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361978, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361978, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 00:59:42.886: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361978, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361978, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361978, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361978, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 13 00:59:45.897: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the 
AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 00:59:57.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8048" for this suite. STEP: Destroying namespace "webhook-8048-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.686 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":21,"skipped":400,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:54:37.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W1113 00:54:37.286071 26 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ForbidConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring no more jobs are scheduled STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:01.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-1575" for this suite. 
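The five-plus minutes reported in the [SLOW TEST] summary just below are inherent to this spec: the suite creates a cronjob that fires every minute with a long-running job, then sits through several schedule ticks to prove that ForbidConcurrent suppresses new jobs while one is active (the forbid-27279415-nj4vx pod listed on node2 earlier appears to be that long-running job). The batch/v1beta1 deprecation warning comes from the test's own API choice; a minimal equivalent under batch/v1 (stable since v1.21), with hypothetical name and sleep duration, would be:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: forbid-example            # hypothetical name
    spec:
      schedule: "*/1 * * * *"         # fire every minute
      concurrencyPolicy: Forbid       # skip a tick while a job is still running
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
              - name: c
                image: busybox:1.28
                command: ["sleep", "300"]   # outlive several schedule ticks
    EOF

With Forbid in place, kubectl get jobs should keep showing a single active job until the sleep finishes, which is what the "Ensuring no more jobs are scheduled" step asserts.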
• [SLOW TEST:324.072 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":4,"skipped":72,"failed":0} S ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:59:53.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 00:59:53.274: INFO: Creating deployment "test-recreate-deployment" Nov 13 00:59:53.277: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 Nov 13 00:59:53.282: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Nov 13 00:59:55.288: INFO: Waiting for deployment "test-recreate-deployment" to complete Nov 13 00:59:55.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361993, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361993, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361993, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772361993, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} [the same v1.DeploymentStatus was logged again, byte-for-byte unchanged, at 00:59:57.296, 00:59:59.294, 01:00:01.294, and 01:00:03.295] Nov 13 01:00:05.294: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Nov 13 01:00:05.301: INFO: Updating deployment test-recreate-deployment Nov 13 01:00:05.301: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Nov 13 01:00:05.340: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-9681 
689bfeb5-b5f8-418a-a050-4223533be41a 69981 2 2021-11-13 00:59:53 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-11-13 01:00:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-11-13 01:00:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001b32bf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-11-13 01:00:05 +0000 UTC,LastTransitionTime:2021-11-13 01:00:05 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2021-11-13 01:00:05 +0000 UTC,LastTransitionTime:2021-11-13 00:59:53 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Nov 13 01:00:05.343: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-9681 7162ef3e-9393-4b48-8d60-7f9058e2e23c 69980 1 2021-11-13 01:00:05 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 
deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 689bfeb5-b5f8-418a-a050-4223533be41a 0xc001b33090 0xc001b33091}] [] [{kube-controller-manager Update apps/v1 2021-11-13 01:00:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"689bfeb5-b5f8-418a-a050-4223533be41a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001b33108 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 13 01:00:05.343: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Nov 13 01:00:05.343: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6cb8b65c46 deployment-9681 b492d485-9066-46d2-bbc4-ff4158eafda6 69970 2 2021-11-13 00:59:53 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 689bfeb5-b5f8-418a-a050-4223533be41a 0xc001b32f97 0xc001b32f98}] [] [{kube-controller-manager Update apps/v1 2021-11-13 01:00:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"689bfeb5-b5f8-418a-a050-4223533be41a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6cb8b65c46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001b33028 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 13 01:00:05.346: INFO: Pod "test-recreate-deployment-85d47dcb4-fnfbn" is not available: &Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-fnfbn test-recreate-deployment-85d47dcb4- deployment-9681 670803a5-6513-4cda-8bf9-65cadc197cef 69976 0 2021-11-13 01:00:05 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 7162ef3e-9393-4b48-8d60-7f9058e2e23c 0xc001b3359f 0xc001b335b0}] [] [{kube-controller-manager Update v1 2021-11-13 01:00:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7162ef3e-9393-4b48-8d60-7f9058e2e23c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-blcp7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-blcp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:05.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9681" for this suite. • [SLOW TEST:12.104 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":27,"skipped":322,"failed":0} SSSS ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":127,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:59:56.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Nov 13 00:59:56.143: INFO: Waiting up to 5m0s for pod "downward-api-6127e03a-0b46-45c0-afb1-19966ef5fe63" in namespace "downward-api-6928" to be "Succeeded or Failed" Nov 13 00:59:56.145: INFO: Pod "downward-api-6127e03a-0b46-45c0-afb1-19966ef5fe63": Phase="Pending", Reason="", readiness=false. Elapsed: 1.913605ms Nov 13 00:59:58.148: INFO: Pod "downward-api-6127e03a-0b46-45c0-afb1-19966ef5fe63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004453611s Nov 13 01:00:00.151: INFO: Pod "downward-api-6127e03a-0b46-45c0-afb1-19966ef5fe63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007779253s Nov 13 01:00:02.155: INFO: Pod "downward-api-6127e03a-0b46-45c0-afb1-19966ef5fe63": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.012066517s Nov 13 01:00:04.160: INFO: Pod "downward-api-6127e03a-0b46-45c0-afb1-19966ef5fe63": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016672286s Nov 13 01:00:06.164: INFO: Pod "downward-api-6127e03a-0b46-45c0-afb1-19966ef5fe63": Phase="Pending", Reason="", readiness=false. Elapsed: 10.021178976s Nov 13 01:00:08.170: INFO: Pod "downward-api-6127e03a-0b46-45c0-afb1-19966ef5fe63": Phase="Pending", Reason="", readiness=false. Elapsed: 12.026470667s Nov 13 01:00:10.173: INFO: Pod "downward-api-6127e03a-0b46-45c0-afb1-19966ef5fe63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.02986468s STEP: Saw pod success Nov 13 01:00:10.173: INFO: Pod "downward-api-6127e03a-0b46-45c0-afb1-19966ef5fe63" satisfied condition "Succeeded or Failed" Nov 13 01:00:10.175: INFO: Trying to get logs from node node2 pod downward-api-6127e03a-0b46-45c0-afb1-19966ef5fe63 container dapi-container: STEP: delete the pod Nov 13 01:00:10.188: INFO: Waiting for pod downward-api-6127e03a-0b46-45c0-afb1-19966ef5fe63 to disappear Nov 13 01:00:10.190: INFO: Pod downward-api-6127e03a-0b46-45c0-afb1-19966ef5fe63 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:10.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6928" for this suite. • [SLOW TEST:14.088 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":127,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:10.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap that has name configmap-test-emptyKey-32f05587-e091-499e-8521-cd3b600d947e [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:10.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-767" for this suite. 
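The downward-api pod above succeeds once the kubelet injects the pod's own metadata into the dapi-container environment; the configMap case that follows never schedules a pod at all, since it is pure apiserver validation: a ConfigMap whose data map uses the empty string as a key must be rejected. A hedged repro of that validation failure (the name is hypothetical, and the exact error wording may vary by version) is:

    cat <<'EOF' | kubectl create -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: emptykey-example   # hypothetical name
    data:
      "": "value"              # empty key, rejected by apiserver validation
    EOF

The create fails immediately with an Invalid error stating, roughly, that a valid config map key must consist of alphanumeric characters, '-', '_' or '.'.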
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":14,"skipped":131,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:59:58.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in volume subpath Nov 13 00:59:58.211: INFO: Waiting up to 5m0s for pod "var-expansion-857d121e-62d7-4fc8-85a4-49c616953b49" in namespace "var-expansion-3041" to be "Succeeded or Failed" Nov 13 00:59:58.213: INFO: Pod "var-expansion-857d121e-62d7-4fc8-85a4-49c616953b49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030497ms Nov 13 01:00:00.217: INFO: Pod "var-expansion-857d121e-62d7-4fc8-85a4-49c616953b49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006407577s Nov 13 01:00:02.221: INFO: Pod "var-expansion-857d121e-62d7-4fc8-85a4-49c616953b49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010293487s Nov 13 01:00:04.226: INFO: Pod "var-expansion-857d121e-62d7-4fc8-85a4-49c616953b49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014884614s Nov 13 01:00:06.231: INFO: Pod "var-expansion-857d121e-62d7-4fc8-85a4-49c616953b49": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01970613s Nov 13 01:00:08.235: INFO: Pod "var-expansion-857d121e-62d7-4fc8-85a4-49c616953b49": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023620239s Nov 13 01:00:10.238: INFO: Pod "var-expansion-857d121e-62d7-4fc8-85a4-49c616953b49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.026782844s STEP: Saw pod success Nov 13 01:00:10.238: INFO: Pod "var-expansion-857d121e-62d7-4fc8-85a4-49c616953b49" satisfied condition "Succeeded or Failed" Nov 13 01:00:10.241: INFO: Trying to get logs from node node1 pod var-expansion-857d121e-62d7-4fc8-85a4-49c616953b49 container dapi-container: STEP: delete the pod Nov 13 01:00:10.254: INFO: Waiting for pod var-expansion-857d121e-62d7-4fc8-85a4-49c616953b49 to disappear Nov 13 01:00:10.256: INFO: Pod var-expansion-857d121e-62d7-4fc8-85a4-49c616953b49 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:10.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3041" for this suite. 
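The substitution exercised above is volume subPathExpr expansion (summarized in the [SLOW TEST] block just below): the kubelet expands $(VAR) references from the container's own environment before bind-mounting the subpath, so a pod can mount a per-pod directory without knowing its own name in advance. A minimal sketch under that assumption, with hypothetical names throughout:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-example    # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox:1.28
        command: ["sh", "-c", "test -d /vol && echo success"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: workdir
          mountPath: /vol
          subPathExpr: $(POD_NAME)   # expanded by the kubelet, not by a shell
      volumes:
      - name: workdir
        emptyDir: {}
    EOF

As in the test above, the pod is expected to run to Succeeded; the assertion of interest is simply that the mount exists at the expanded path.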
• [SLOW TEST:12.090 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":22,"skipped":482,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:57:30.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-561 STEP: creating service affinity-nodeport-transition in namespace services-561 STEP: creating replication controller affinity-nodeport-transition in namespace services-561 I1113 00:57:30.063173 30 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-561, replica count: 3 I1113 00:57:33.113669 30 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 00:57:36.114626 30 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 00:57:39.115975 30 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 13 00:57:39.127: INFO: Creating new exec pod Nov 13 00:57:46.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Nov 13 00:57:46.529: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Nov 13 00:57:46.530: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Nov 13 00:57:46.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.33.37 80' Nov 13 00:57:47.073: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.33.37 80\nConnection to 10.233.33.37 80 port [tcp/http] succeeded!\n" Nov 13 00:57:47.073: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Nov 13 00:57:47.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec 
execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:57:47.333: INFO: rc: 1 Nov 13 00:57:47.334: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:48.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:57:48.574: INFO: rc: 1 Nov 13 00:57:48.574: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:49.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:57:49.602: INFO: rc: 1 Nov 13 00:57:49.602: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:50.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:57:50.607: INFO: rc: 1 Nov 13 00:57:50.607: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:51.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:57:51.575: INFO: rc: 1 Nov 13 00:57:51.575: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:57:52.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:57:52.591: INFO: rc: 1 Nov 13 00:57:52.591: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:53.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:57:53.880: INFO: rc: 1 Nov 13 00:57:53.880: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:54.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:57:54.592: INFO: rc: 1 Nov 13 00:57:54.592: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:57:55.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:57:55.600: INFO: rc: 1 Nov 13 00:57:55.600: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
[... the same probe repeats roughly once per second from 00:57:53 through 00:59:26, every attempt failing with "nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused"; the entries are identical apart from their timestamps and an occasional reordering of the two sh -x trace lines ...]
Nov 13 00:59:24.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:24.634: INFO: rc: 1 Nov 13 00:59:24.634: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:25.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:25.574: INFO: rc: 1 Nov 13 00:59:25.574: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:26.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:26.623: INFO: rc: 1 Nov 13 00:59:26.623: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:27.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:28.285: INFO: rc: 1 Nov 13 00:59:28.285: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:59:28.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:28.653: INFO: rc: 1 Nov 13 00:59:28.653: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:29.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:29.579: INFO: rc: 1 Nov 13 00:59:29.579: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:30.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:30.636: INFO: rc: 1 Nov 13 00:59:30.636: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:31.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:31.585: INFO: rc: 1 Nov 13 00:59:31.585: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:59:32.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:32.599: INFO: rc: 1 Nov 13 00:59:32.599: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:33.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:33.567: INFO: rc: 1 Nov 13 00:59:33.567: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:34.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:34.599: INFO: rc: 1 Nov 13 00:59:34.599: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:35.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:35.606: INFO: rc: 1 Nov 13 00:59:35.606: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:59:36.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:36.613: INFO: rc: 1 Nov 13 00:59:36.613: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:37.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:37.629: INFO: rc: 1 Nov 13 00:59:37.629: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:38.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:38.642: INFO: rc: 1 Nov 13 00:59:38.642: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:39.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:39.817: INFO: rc: 1 Nov 13 00:59:39.817: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:59:40.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:40.736: INFO: rc: 1 Nov 13 00:59:40.736: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:41.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:41.577: INFO: rc: 1 Nov 13 00:59:41.577: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:42.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:43.050: INFO: rc: 1 Nov 13 00:59:43.050: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:43.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:43.595: INFO: rc: 1 Nov 13 00:59:43.595: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32708 + echo hostName nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 00:59:44.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:44.612: INFO: rc: 1 Nov 13 00:59:44.612: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:45.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:45.585: INFO: rc: 1 Nov 13 00:59:45.585: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:46.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:46.605: INFO: rc: 1 Nov 13 00:59:46.605: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:47.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708' Nov 13 00:59:47.666: INFO: rc: 1 Nov 13 00:59:47.666: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32708 nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
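The probe above is just a shell one-liner run through kubectl exec in a retry loop; 10.10.190.207 matches node1's address in the node dump later in this log, and 32708 is the Service's NodePort. A minimal stand-alone re-creation follows. It is not the framework's own code: the pod name, namespace, IP, port, and the 1 s retry / 2 m timeout loop shape are simply read off the log.

```go
// probe.go - a minimal re-creation of the reachability probe in the log above.
// NOT the e2e framework's code; all concrete values are copied from this run.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// The same shell pipeline the suite runs inside the exec pod.
	const probe = "echo hostName | nc -v -t -w 2 10.10.190.207 32708"
	deadline := time.Now().Add(2 * time.Minute) // matches the 2m0s timeout in the failure

	for time.Now().Before(deadline) {
		out, err := exec.Command(
			"kubectl", "--kubeconfig=/root/.kube/config",
			"--namespace=services-561", "exec", "execpod-affinitynmft4",
			"--", "/bin/sh", "-x", "-c", probe,
		).CombinedOutput()
		if err == nil {
			fmt.Printf("reachable, got: %s\n", out)
			return
		}
		fmt.Printf("probe failed (%v), retrying...\n%s\n", err, out)
		time.Sleep(time.Second) // matches the ~1s cadence of the retries above
	}
	fmt.Println("service is not reachable within 2m0s timeout")
}
```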
Nov 13 00:59:47.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708'
Nov 13 00:59:48.172: INFO: rc: 1
Nov 13 00:59:48.172: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-561 exec execpod-affinitynmft4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32708:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32708
nc: connect to 10.10.190.207 port 32708 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 00:59:48.173: FAIL: Unexpected error:
    <*errors.errorString | 0xc00365c3d0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32708 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32708 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc001597340, 0x779f8f8, 0xc00244a000, 0xc00006b180, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2527
k8s.io/kubernetes/test/e2e/network.glob..func24.27()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1862 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001519500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001519500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001519500, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Nov 13 00:59:48.174: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-561, will wait for the garbage collector to delete the pods
Nov 13 00:59:48.251: INFO: Deleting ReplicationController affinity-nodeport-transition took: 3.999604ms
Nov 13 00:59:48.352: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.730587ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-561".
STEP: Found 27 events.
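The failing spec (execAffinityTestForNonLBServiceWithTransition, per the stack trace) exercises a NodePort Service whose session affinity is switched between ClientIP and None while clients keep connecting through the node port. For reference, here is a sketch of the kind of Service involved, built from the public k8s.io/api types. The name, namespace, NodePort, and affinity semantics come from this run; the selector and port values are illustrative assumptions.

```go
// affinity_service.go - a sketch of a NodePort Service with ClientIP session
// affinity, the kind of object the failing spec above creates and mutates.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func affinityService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "affinity-nodeport-transition", // from the log
			Namespace: "services-561",                 // from the log
		},
		Spec: corev1.ServiceSpec{
			Type: corev1.ServiceTypeNodePort,
			// Assumed label; the run's actual selector is not in the log.
			Selector: map[string]string{"name": "affinity-nodeport-transition"},
			Ports: []corev1.ServicePort{{
				Port:       80,                   // assumed service port
				TargetPort: intstr.FromInt(9376), // assumed backend port
				NodePort:   32708,                // the NodePort probed in this run
			}},
			// The transition variant of the test flips this field between
			// ClientIP and None, asserting traffic sticks to one endpoint
			// while ClientIP affinity is in effect.
			SessionAffinity: corev1.ServiceAffinityClientIP,
		},
	}
}

func main() { fmt.Println(affinityService().Spec.SessionAffinity) }
```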
Nov 13 01:00:12.768: INFO: At 2021-11-13 00:57:30 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-sf4hv
Nov 13 01:00:12.769: INFO: At 2021-11-13 00:57:30 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-d9vns
Nov 13 01:00:12.769: INFO: At 2021-11-13 00:57:30 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-m7np5
Nov 13 01:00:12.769: INFO: At 2021-11-13 00:57:30 +0000 UTC - event for affinity-nodeport-transition-d9vns: {default-scheduler } Scheduled: Successfully assigned services-561/affinity-nodeport-transition-d9vns to node1
Nov 13 01:00:12.769: INFO: At 2021-11-13 00:57:30 +0000 UTC - event for affinity-nodeport-transition-m7np5: {default-scheduler } Scheduled: Successfully assigned services-561/affinity-nodeport-transition-m7np5 to node1
Nov 13 01:00:12.769: INFO: At 2021-11-13 00:57:30 +0000 UTC - event for affinity-nodeport-transition-sf4hv: {default-scheduler } Scheduled: Successfully assigned services-561/affinity-nodeport-transition-sf4hv to node2
Nov 13 01:00:12.769: INFO: At 2021-11-13 00:57:31 +0000 UTC - event for affinity-nodeport-transition-d9vns: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 01:00:12.769: INFO: At 2021-11-13 00:57:31 +0000 UTC - event for affinity-nodeport-transition-sf4hv: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 01:00:12.769: INFO: At 2021-11-13 00:57:32 +0000 UTC - event for affinity-nodeport-transition-m7np5: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 01:00:12.769: INFO: At 2021-11-13 00:57:32 +0000 UTC - event for affinity-nodeport-transition-sf4hv: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 283.773281ms
Nov 13 01:00:12.769: INFO: At 2021-11-13 00:57:32 +0000 UTC - event for affinity-nodeport-transition-sf4hv: {kubelet node2} Created: Created container affinity-nodeport-transition
Nov 13 01:00:12.769: INFO: At 2021-11-13 00:57:33 +0000 UTC - event for affinity-nodeport-transition-sf4hv: {kubelet node2} Started: Started container affinity-nodeport-transition
Nov 13 01:00:12.769: INFO: At 2021-11-13 00:57:36 +0000 UTC - event for affinity-nodeport-transition-d9vns: {kubelet node1} Created: Created container affinity-nodeport-transition
Nov 13 01:00:12.769: INFO: At 2021-11-13 00:57:36 +0000 UTC - event for affinity-nodeport-transition-d9vns: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 4.648368272s
Nov 13 01:00:12.769: INFO: At 2021-11-13 00:57:36 +0000 UTC - event for affinity-nodeport-transition-m7np5: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 4.11132108s
Nov 13 01:00:12.769: INFO: At 2021-11-13 00:57:37 +0000 UTC - event for affinity-nodeport-transition-d9vns: {kubelet node1} Started: Started container affinity-nodeport-transition
Nov 13 01:00:12.769: INFO: At 2021-11-13 00:57:37 +0000 UTC - event for affinity-nodeport-transition-m7np5: {kubelet node1} Created: Created container affinity-nodeport-transition
Nov 13 01:00:12.769: INFO: At 2021-11-13 00:57:37 +0000 UTC - event for affinity-nodeport-transition-m7np5: {kubelet node1} Started: Started container affinity-nodeport-transition
Nov 13 01:00:12.769:
INFO: At 2021-11-13 00:57:39 +0000 UTC - event for execpod-affinitynmft4: {default-scheduler } Scheduled: Successfully assigned services-561/execpod-affinitynmft4 to node2 Nov 13 01:00:12.769: INFO: At 2021-11-13 00:57:40 +0000 UTC - event for execpod-affinitynmft4: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Nov 13 01:00:12.769: INFO: At 2021-11-13 00:57:41 +0000 UTC - event for execpod-affinitynmft4: {kubelet node2} Started: Started container agnhost-container Nov 13 01:00:12.769: INFO: At 2021-11-13 00:57:41 +0000 UTC - event for execpod-affinitynmft4: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 275.408412ms Nov 13 01:00:12.769: INFO: At 2021-11-13 00:57:41 +0000 UTC - event for execpod-affinitynmft4: {kubelet node2} Created: Created container agnhost-container Nov 13 01:00:12.769: INFO: At 2021-11-13 00:59:48 +0000 UTC - event for affinity-nodeport-transition-d9vns: {kubelet node1} Killing: Stopping container affinity-nodeport-transition Nov 13 01:00:12.769: INFO: At 2021-11-13 00:59:48 +0000 UTC - event for affinity-nodeport-transition-m7np5: {kubelet node1} Killing: Stopping container affinity-nodeport-transition Nov 13 01:00:12.769: INFO: At 2021-11-13 00:59:48 +0000 UTC - event for affinity-nodeport-transition-sf4hv: {kubelet node2} Killing: Stopping container affinity-nodeport-transition Nov 13 01:00:12.769: INFO: At 2021-11-13 00:59:48 +0000 UTC - event for execpod-affinitynmft4: {kubelet node2} Killing: Stopping container agnhost-container Nov 13 01:00:12.771: INFO: POD NODE PHASE GRACE CONDITIONS Nov 13 01:00:12.771: INFO: Nov 13 01:00:12.775: INFO: Logging node info for node master1 Nov 13 01:00:12.777: INFO: Node Info: &Node{ObjectMeta:{master1 56d66c54-e52b-494a-a758-e4b658c4b245 70084 0 2021-11-12 21:05:50 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:05:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:13:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:25 +0000 UTC,LastTransitionTime:2021-11-12 21:11:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:09 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:09 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:09 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 01:00:09 +0000 UTC,LastTransitionTime:2021-11-12 21:11:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94e600d00e79450a9fb632d8473a11eb,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:6e4bb815-8b93-47c2-9321-93e7ada261f6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:57d1a39684ee5a5b5d34638cff843561d440d0f996303b2e841cabf228a4c2af nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 01:00:12.778: INFO: Logging kubelet events for node master1 Nov 13 01:00:12.780: INFO: Logging pods the kubelet 
thinks is on node master1
Nov 13 01:00:12.802: INFO: kube-proxy-6m7qt started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:12.802: INFO: Container kube-proxy ready: true, restart count 1
Nov 13 01:00:12.802: INFO: container-registry-65d7c44b96-qwqcz started at 2021-11-12 21:12:56 +0000 UTC (0+2 container statuses recorded)
Nov 13 01:00:12.802: INFO: Container docker-registry ready: true, restart count 0
Nov 13 01:00:12.802: INFO: Container nginx ready: true, restart count 0
Nov 13 01:00:12.802: INFO: kube-apiserver-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:12.802: INFO: Container kube-apiserver ready: true, restart count 0
Nov 13 01:00:12.802: INFO: kube-flannel-79bvx started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 01:00:12.802: INFO: Init container install-cni ready: true, restart count 0
Nov 13 01:00:12.802: INFO: Container kube-flannel ready: true, restart count 2
Nov 13 01:00:12.802: INFO: kube-multus-ds-amd64-qtmwl started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:12.802: INFO: Container kube-multus ready: true, restart count 1
Nov 13 01:00:12.802: INFO: coredns-8474476ff8-9vc8b started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:12.802: INFO: Container coredns ready: true, restart count 2
Nov 13 01:00:12.802: INFO: node-exporter-zm5hq started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 01:00:12.802: INFO: Container kube-rbac-proxy ready: true, restart count 0
Nov 13 01:00:12.802: INFO: Container node-exporter ready: true, restart count 0
Nov 13 01:00:12.802: INFO: kube-scheduler-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:12.802: INFO: Container kube-scheduler ready: true, restart count 0
Nov 13 01:00:12.802: INFO: kube-controller-manager-master1 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:12.802: INFO: Container kube-controller-manager ready: true, restart count 2
W1113 01:00:12.816649 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
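The per-node dumps in this section are the framework's automatic failure diagnostics: node conditions and images, kubelet events, and the pods placed on each node. Roughly the same information can be gathered by hand; below is a sketch wrapping plain kubectl calls. Only the node name is taken from the log; the field selectors are standard kubectl options, and nothing here is part of the suite.

```go
// nodedump.go - a rough manual equivalent of the framework's per-node
// diagnostics: conditions/images, node events, and pods on one node.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a kubectl invocation and prints its combined output.
func run(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl %v failed: %v\n", args, err)
	}
	fmt.Printf("%s\n", out)
}

func main() {
	// Node conditions, addresses, and images (the &Node{...} dump above).
	run("describe", "node", "master1")
	// Events the kubelet reported for the node.
	run("get", "events", "--all-namespaces",
		"--field-selector=involvedObject.kind=Node,involvedObject.name=master1")
	// Pods the scheduler placed on the node (the "started at" listing above).
	run("get", "pods", "--all-namespaces",
		"--field-selector=spec.nodeName=master1")
}
```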
Nov 13 01:00:12.883: INFO: Latency metrics for node master1 Nov 13 01:00:12.883: INFO: Logging node info for node master2 Nov 13 01:00:12.886: INFO: Node Info: &Node{ObjectMeta:{master2 9cc6c106-2749-4b3a-bbe2-d8a672ab49e0 70025 0 2021-11-12 21:06:20 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:06:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-11-12 21:16:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-12 21:16:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:30 +0000 UTC,LastTransitionTime:2021-11-12 21:11:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:06 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:06 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:06 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 01:00:06 +0000 UTC,LastTransitionTime:2021-11-12 21:08:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:65d51a0e6dc44ad1ac5d3b5cd37365f1,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:728abaee-0c5e-4ddb-a22e-72a1345c5ab6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 01:00:12.887: INFO: Logging kubelet events for node master2 Nov 13 01:00:12.889: INFO: Logging pods the kubelet thinks is on node master2 Nov 13 01:00:12.898: INFO: node-exporter-clpwc started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 01:00:12.898: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:00:12.898: INFO: Container node-exporter ready: true, restart count 0 Nov 13 01:00:12.898: INFO: kube-controller-manager-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:12.898: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 13 01:00:12.898: INFO: kube-scheduler-master2 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:12.898: INFO: Container kube-scheduler ready: true, restart count 2 Nov 13 01:00:12.898: INFO: kube-proxy-5xbt9 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:12.898: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 01:00:12.898: INFO: kube-flannel-x76f4 started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 01:00:12.898: INFO: Init container install-cni ready: true, restart count 0 Nov 13 01:00:12.898: INFO: Container kube-flannel ready: true, restart count 1 Nov 13 01:00:12.898: INFO: kube-multus-ds-amd64-8zzgk started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:12.898: INFO: Container kube-multus ready: true, restart count 1 Nov 13 01:00:12.898: INFO: coredns-8474476ff8-s7twh started at 2021-11-12 21:09:11 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:12.898: INFO: Container coredns ready: true, restart count 1 Nov 13 01:00:12.898: INFO: kube-apiserver-master2 started at 2021-11-12 
21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:12.898: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 01:00:12.898: INFO: node-feature-discovery-controller-cff799f9f-c54h8 started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:12.898: INFO: Container nfd-controller ready: true, restart count 0 W1113 01:00:12.913768 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 01:00:12.986: INFO: Latency metrics for node master2 Nov 13 01:00:12.986: INFO: Logging node info for node master3 Nov 13 01:00:12.990: INFO: Node Info: &Node{ObjectMeta:{master3 fce0cd54-e4d8-4ce1-b720-522aad2d7989 70169 0 2021-11-12 21:06:31 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:10 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:10 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:10 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 01:00:10 +0000 UTC,LastTransitionTime:2021-11-12 21:11:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:592c271b4697499588d9c2b3767b866a,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a95de4ca-c566-4b34-8463-623af932d416,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 01:00:12.990: INFO: Logging kubelet events for node master3 Nov 13 01:00:12.993: INFO: Logging pods the kubelet thinks is on node master3 Nov 13 01:00:13.001: INFO: kube-scheduler-master3 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.001: INFO: Container kube-scheduler ready: true, restart count 2 Nov 13 01:00:13.001: INFO: node-exporter-l4x25 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 01:00:13.001: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:00:13.001: INFO: Container node-exporter ready: true, restart count 0 Nov 13 01:00:13.001: INFO: kube-apiserver-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.001: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 01:00:13.001: INFO: kube-controller-manager-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.001: INFO: Container kube-controller-manager ready: true, restart count 3 Nov 13 01:00:13.001: INFO: kube-multus-ds-amd64-vp8p7 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.001: INFO: Container kube-multus ready: true, restart count 1 Nov 13 01:00:13.001: INFO: dns-autoscaler-7df78bfcfb-d88qs started at 2021-11-12 21:09:13 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.001: INFO: Container autoscaler ready: true, restart count 1 Nov 13 01:00:13.001: INFO: kube-proxy-tssd5 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.001: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 01:00:13.001: INFO: kube-flannel-vxlrs started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 01:00:13.001: 
INFO: Init container install-cni ready: true, restart count 0 Nov 13 01:00:13.001: INFO: Container kube-flannel ready: true, restart count 1 W1113 01:00:13.015400 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 01:00:13.086: INFO: Latency metrics for node master3 Nov 13 01:00:13.086: INFO: Logging node info for node node1 Nov 13 01:00:13.089: INFO: Node Info: &Node{ObjectMeta:{node1 6ceb907c-9809-4d18-88c6-b1e10ba80f97 69926 0 2021-11-12 21:07:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-12 21:20:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:27 +0000 UTC,LastTransitionTime:2021-11-12 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:03 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:03 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:03 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 01:00:03 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7cf6287777fe4e3b9a80df40dea25b6d,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:2125bc5f-9167-464a-b6d0-8e8a192327d3,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:1841df8d4cc71e4f69cc1603012b99570f40d18cd36ee1065933b46f984cf0cd alpine:3.12],SizeBytes:5592390,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 01:00:13.090: INFO: Logging kubelet events for node node1 Nov 13 01:00:13.093: INFO: Logging pods the kubelet thinks is on node node1 Nov 13 01:00:13.112: INFO: cmk-webhook-6c9d5f8578-2gp25 started at 2021-11-12 21:21:01 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.112: INFO: Container cmk-webhook ready: true, 
restart count 0 Nov 13 01:00:13.112: INFO: simpletest-rc-to-be-deleted-jlpt2 started at 2021-11-13 00:59:46 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.112: INFO: Container nginx ready: true, restart count 0 Nov 13 01:00:13.112: INFO: webserver-deployment-847dcfb7fb-jqzfk started at 2021-11-13 01:00:01 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.112: INFO: Container httpd ready: true, restart count 0 Nov 13 01:00:13.112: INFO: node-feature-discovery-worker-zgr4c started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.112: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 01:00:13.112: INFO: externalname-service-szcjp started at 2021-11-13 00:58:14 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.112: INFO: Container externalname-service ready: true, restart count 0 Nov 13 01:00:13.112: INFO: prometheus-k8s-0 started at 2021-11-12 21:22:14 +0000 UTC (0+4 container statuses recorded) Nov 13 01:00:13.112: INFO: Container config-reloader ready: true, restart count 0 Nov 13 01:00:13.112: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 01:00:13.112: INFO: Container grafana ready: true, restart count 0 Nov 13 01:00:13.112: INFO: Container prometheus ready: true, restart count 1 Nov 13 01:00:13.112: INFO: simpletest.rc-bhfm2 started at 2021-11-13 00:59:51 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.112: INFO: Container nginx ready: true, restart count 0 Nov 13 01:00:13.112: INFO: kube-flannel-r7bbp started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 01:00:13.112: INFO: Init container install-cni ready: true, restart count 2 Nov 13 01:00:13.112: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 01:00:13.112: INFO: kube-multus-ds-amd64-4wqsv started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.112: INFO: Container kube-multus ready: true, restart count 1 Nov 13 01:00:13.112: INFO: cmk-4tcdw started at 2021-11-12 21:21:00 +0000 UTC (0+2 container statuses recorded) Nov 13 01:00:13.112: INFO: Container nodereport ready: true, restart count 0 Nov 13 01:00:13.112: INFO: Container reconcile ready: true, restart count 0 Nov 13 01:00:13.113: INFO: node-exporter-hqkfs started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 01:00:13.113: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:00:13.113: INFO: Container node-exporter ready: true, restart count 0 Nov 13 01:00:13.113: INFO: webserver-deployment-847dcfb7fb-z2c9g started at 2021-11-13 01:00:01 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.113: INFO: Container httpd ready: true, restart count 0 Nov 13 01:00:13.113: INFO: nginx-proxy-node1 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.113: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 01:00:13.113: INFO: test-rolling-update-controller-27ttc started at (0+0 container statuses recorded) Nov 13 01:00:13.113: INFO: prometheus-operator-585ccfb458-qcz7s started at 2021-11-12 21:21:55 +0000 UTC (0+2 container statuses recorded) Nov 13 01:00:13.113: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:00:13.113: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 01:00:13.113: INFO: simpletest.rc-xz246 started at 2021-11-13 00:59:51 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.113: INFO: Container nginx ready: true, restart 
count 0 Nov 13 01:00:13.113: INFO: collectd-74xkn started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded) Nov 13 01:00:13.113: INFO: Container collectd ready: true, restart count 0 Nov 13 01:00:13.113: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 01:00:13.113: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 01:00:13.113: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.113: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 01:00:13.113: INFO: simpletest.rc-6zj7b started at 2021-11-13 00:59:51 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.113: INFO: Container nginx ready: true, restart count 0 Nov 13 01:00:13.113: INFO: externalname-service-6nfb8 started at 2021-11-13 00:58:14 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.113: INFO: Container externalname-service ready: true, restart count 0 Nov 13 01:00:13.113: INFO: webserver-deployment-847dcfb7fb-jw27t started at 2021-11-13 01:00:01 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.113: INFO: Container httpd ready: false, restart count 0 Nov 13 01:00:13.113: INFO: webserver-deployment-847dcfb7fb-tb7pb started at 2021-11-13 01:00:01 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.113: INFO: Container httpd ready: false, restart count 0 Nov 13 01:00:13.113: INFO: kube-proxy-p6kbl started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.113: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 01:00:13.113: INFO: simpletest.rc-wx56l started at 2021-11-13 00:59:51 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.113: INFO: Container nginx ready: true, restart count 0 Nov 13 01:00:13.113: INFO: simpletest.rc-vfjb4 started at 2021-11-13 00:59:51 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.113: INFO: Container nginx ready: true, restart count 0 Nov 13 01:00:13.113: INFO: concurrent-27279420-g8d8r started at 2021-11-13 01:00:00 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.113: INFO: Container c ready: true, restart count 0 Nov 13 01:00:13.113: INFO: webserver-deployment-847dcfb7fb-m269s started at 2021-11-13 01:00:01 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.113: INFO: Container httpd ready: false, restart count 0 Nov 13 01:00:13.113: INFO: webserver-deployment-847dcfb7fb-s8c8t started at 2021-11-13 01:00:01 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.113: INFO: Container httpd ready: true, restart count 0 Nov 13 01:00:13.113: INFO: test-rollover-controller-2wjrq started at 2021-11-13 01:00:05 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.113: INFO: Container httpd ready: false, restart count 0 Nov 13 01:00:13.113: INFO: cmk-init-discover-node1-vkj2s started at 2021-11-12 21:20:18 +0000 UTC (0+3 container statuses recorded) Nov 13 01:00:13.113: INFO: Container discover ready: false, restart count 0 Nov 13 01:00:13.113: INFO: Container init ready: false, restart count 0 Nov 13 01:00:13.113: INFO: Container install ready: false, restart count 0 W1113 01:00:13.129588 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
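(Annotation.) The node dumps in this log pair Capacity with Allocatable for every resource; in the node1 dump above, cpu drops from 80 to 77 and memory from 201269633024 to 178884632576 bytes. The difference is what the kubelet withholds from pods (system/kube reserved and eviction thresholds; the three withheld CPUs plausibly correspond to the logged cmk.intel.com/exclusive-cores: 3, though the dump alone cannot confirm that). A minimal Go sketch that makes this subtraction explicit using the apimachinery resource.Quantity type — the reserved() helper is hypothetical, written only to illustrate the arithmetic, and is not part of the e2e suite:

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    // reserved computes capacity - allocatable for one resource: the slice the
    // kubelet holds back from pods. Hypothetical helper for reading the node
    // dumps in this log; not part of the e2e suite.
    func reserved(capacity, allocatable string) string {
        c := resource.MustParse(capacity)
        a := resource.MustParse(allocatable)
        c.Sub(a) // Quantity arithmetic mutates the receiver
        return c.String()
    }

    func main() {
        // Values exactly as logged for node1.
        fmt.Println("cpu reserved:", reserved("80", "77"))
        fmt.Println("memory reserved:", reserved("201269633024", "178884632576"))
    }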
Nov 13 01:00:13.555: INFO: Latency metrics for node node1 Nov 13 01:00:13.555: INFO: Logging node info for node node2 Nov 13 01:00:13.559: INFO: Node Info: &Node{ObjectMeta:{node2 652722dd-12b1-4529-ba4d-a00c590e4a68 70057 0 2021-11-12 21:07:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-12 21:20:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:08 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:08 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:08 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 01:00:08 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fec67f7547064c508c27d44a9bf99ae7,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0a05ac00-ff21-4518-bf68-3564c7a8cf65,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 01:00:13.560: INFO: Logging kubelet events for node node2 Nov 13 01:00:13.562: INFO: Logging pods the kubelet thinks is on node node2 Nov 13 01:00:13.586: INFO: webserver-deployment-847dcfb7fb-6fvht started at 2021-11-13 01:00:01 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.586: INFO: Container httpd ready: false, restart count 0 Nov 13 01:00:13.586: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.586: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 01:00:13.586: INFO: node-exporter-hstd9 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 01:00:13.586: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:00:13.586: INFO: Container node-exporter ready: 
true, restart count 0 Nov 13 01:00:13.586: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 started at 2021-11-12 21:25:09 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.586: INFO: Container tas-extender ready: true, restart count 0 Nov 13 01:00:13.586: INFO: fail-once-local-lw9rd started at (0+0 container statuses recorded) Nov 13 01:00:13.586: INFO: simpletest-rc-to-be-deleted-86xml started at 2021-11-13 00:59:46 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.586: INFO: Container nginx ready: true, restart count 0 Nov 13 01:00:13.586: INFO: nginx-proxy-node2 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.586: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 01:00:13.586: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.586: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 01:00:13.586: INFO: webserver-deployment-847dcfb7fb-dj86p started at 2021-11-13 01:00:01 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.587: INFO: Container httpd ready: false, restart count 0 Nov 13 01:00:13.587: INFO: simpletest.rc-4j92t started at 2021-11-13 00:59:51 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.587: INFO: Container nginx ready: true, restart count 0 Nov 13 01:00:13.587: INFO: simpletest.rc-bz29t started at 2021-11-13 00:59:51 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.587: INFO: Container nginx ready: false, restart count 0 Nov 13 01:00:13.587: INFO: simpletest-rc-to-be-deleted-45xcv started at 2021-11-13 00:59:46 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.587: INFO: Container nginx ready: true, restart count 0 Nov 13 01:00:13.587: INFO: kube-proxy-pzhf2 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.587: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 01:00:13.587: INFO: kube-flannel-mg66r started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 01:00:13.587: INFO: Init container install-cni ready: true, restart count 2 Nov 13 01:00:13.587: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 01:00:13.587: INFO: kube-multus-ds-amd64-2wqj5 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.587: INFO: Container kube-multus ready: true, restart count 1 Nov 13 01:00:13.587: INFO: kubernetes-dashboard-785dcbb76d-w2mls started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.587: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 01:00:13.587: INFO: node-feature-discovery-worker-mm7xs started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.587: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 01:00:13.587: INFO: collectd-mp2z6 started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded) Nov 13 01:00:13.587: INFO: Container collectd ready: true, restart count 0 Nov 13 01:00:13.587: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 01:00:13.587: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 01:00:13.587: INFO: fail-once-local-bd4gs started at 2021-11-13 00:59:47 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.587: INFO: Container c ready: false, restart count 1 Nov 13 01:00:13.587: INFO: simpletest.rc-fn44r started at 2021-11-13 00:59:51 +0000 UTC 
(0+1 container statuses recorded) Nov 13 01:00:13.587: INFO: Container nginx ready: true, restart count 0 Nov 13 01:00:13.587: INFO: cmk-qhvr7 started at 2021-11-12 21:21:01 +0000 UTC (0+2 container statuses recorded) Nov 13 01:00:13.587: INFO: Container nodereport ready: true, restart count 0 Nov 13 01:00:13.587: INFO: Container reconcile ready: true, restart count 0 Nov 13 01:00:13.587: INFO: simpletest.rc-fv568 started at 2021-11-13 00:59:51 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.587: INFO: Container nginx ready: false, restart count 0 Nov 13 01:00:13.587: INFO: webserver-deployment-847dcfb7fb-k8mvk started at 2021-11-13 01:00:01 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.587: INFO: Container httpd ready: false, restart count 0 Nov 13 01:00:13.587: INFO: execpodwxlzs started at 2021-11-13 00:58:19 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.587: INFO: Container agnhost-container ready: true, restart count 0 Nov 13 01:00:13.587: INFO: simpletest-rc-to-be-deleted-c99mp started at 2021-11-13 00:59:46 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.587: INFO: Container nginx ready: true, restart count 0 Nov 13 01:00:13.587: INFO: simpletest-rc-to-be-deleted-6fjqv started at 2021-11-13 00:59:46 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.587: INFO: Container nginx ready: true, restart count 0 Nov 13 01:00:13.587: INFO: fail-once-local-kmzl5 started at 2021-11-13 00:59:47 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.587: INFO: Container c ready: false, restart count 0 Nov 13 01:00:13.587: INFO: simpletest.rc-q8w5r started at 2021-11-13 00:59:51 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.587: INFO: Container nginx ready: true, restart count 0 Nov 13 01:00:13.587: INFO: webserver-deployment-847dcfb7fb-6fglb started at 2021-11-13 01:00:01 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:13.587: INFO: Container httpd ready: true, restart count 0 Nov 13 01:00:13.587: INFO: cmk-init-discover-node2-5f4hp started at 2021-11-12 21:20:38 +0000 UTC (0+3 container statuses recorded) Nov 13 01:00:13.587: INFO: Container discover ready: false, restart count 0 Nov 13 01:00:13.587: INFO: Container init ready: false, restart count 0 Nov 13 01:00:13.587: INFO: Container install ready: false, restart count 0 W1113 01:00:13.601892 30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 01:00:13.936: INFO: Latency metrics for node node2 Nov 13 01:00:13.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-561" for this suite. 
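(Annotation.) The failure summarized just below reports "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32708 over TCP protocol": the framework repeatedly dials the service's NodePort over TCP and gives up after two minutes. A rough Go sketch of such a probe loop, under the assumption of a simple dial-and-retry strategy — probeTCP is an illustrative stand-in; the suite's real helper in test/e2e/network/service.go is more involved:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probeTCP keeps dialing addr until it connects or the deadline passes,
    // mirroring the kind of reachability loop whose 2m0s timeout produced the
    // failure below. Sketch only; retry cadence and per-dial timeout are
    // assumptions, not the suite's actual values.
    func probeTCP(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
            if err == nil {
                conn.Close()
                return nil // endpoint reachable
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
    }

    func main() {
        // NodePort endpoint exactly as reported in the failure below.
        if err := probeTCP("10.10.190.207:32708", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }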
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [163.917 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Nov 13 00:59:48.173: Unexpected error:
      <*errors.errorString | 0xc00365c3d0>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32708 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32708 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572
------------------------------
{"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":18,"skipped":358,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 01:00:10.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 01:00:17.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2702" for this suite.

• [SLOW TEST:7.044 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":23,"skipped":496,"failed":0}
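(Annotation.) For readers reproducing the ResourceQuota steps above outside the suite, a small client-go sketch that mirrors them: create a quota, then poll until the controller fills in status. The quota name, namespace, and polling limits here are hypothetical; the suite uses its own generated namespace and helpers:

    package main

    import (
        "context"
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Same kubeconfig path the suite logs at startup.
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        ns := "resourcequota-demo" // hypothetical; the suite generates e.g. "resourcequota-2702"
        rq := &v1.ResourceQuota{
            ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
            Spec: v1.ResourceQuotaSpec{
                Hard: v1.ResourceList{v1.ResourcePods: resource.MustParse("5")},
            },
        }
        // STEP: Creating a ResourceQuota
        if _, err := client.CoreV1().ResourceQuotas(ns).Create(context.TODO(), rq, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        // STEP: Ensuring resource quota status is calculated
        err = wait.PollImmediate(time.Second, 30*time.Second, func() (bool, error) {
            got, err := client.CoreV1().ResourceQuotas(ns).Get(context.TODO(), "test-quota", metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            _, ok := got.Status.Hard[v1.ResourcePods] // filled in once the quota controller syncs
            return ok, nil
        })
        fmt.Println("quota status calculated:", err == nil)
    }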
[Conformance]","total":-1,"completed":23,"skipped":496,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:01.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:00:01.360: INFO: Creating deployment "webserver-deployment" Nov 13 01:00:01.363: INFO: Waiting for observed generation 1 Nov 13 01:00:03.369: INFO: Waiting for all required pods to come up Nov 13 01:00:03.373: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Nov 13 01:00:17.380: INFO: Waiting for deployment "webserver-deployment" to complete Nov 13 01:00:17.384: INFO: Updating deployment "webserver-deployment" with a non-existent image Nov 13 01:00:17.390: INFO: Updating deployment webserver-deployment Nov 13 01:00:17.390: INFO: Waiting for observed generation 2 Nov 13 01:00:19.398: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Nov 13 01:00:19.400: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Nov 13 01:00:19.403: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Nov 13 01:00:19.411: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Nov 13 01:00:19.411: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Nov 13 01:00:19.413: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Nov 13 01:00:19.417: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Nov 13 01:00:19.417: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Nov 13 01:00:19.423: INFO: Updating deployment webserver-deployment Nov 13 01:00:19.423: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Nov 13 01:00:19.427: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Nov 13 01:00:19.430: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Nov 13 01:00:19.436: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6227 6502ad89-487b-49ab-960b-5b40a0587fb7 70469 3 2021-11-13 01:00:01 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-11-13 01:00:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-11-13 01:00:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000ca06f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-11-13 01:00:15 +0000 UTC,LastTransitionTime:2021-11-13 01:00:15 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-11-13 01:00:17 +0000 UTC,LastTransitionTime:2021-11-13 01:00:01 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Nov 13 01:00:19.439: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-6227 16fcb8fc-49dd-4cbc-8c74-211c45878136 70472 3 2021-11-13 01:00:17 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 6502ad89-487b-49ab-960b-5b40a0587fb7 0xc000ca1057 
0xc000ca1058}] [] [{kube-controller-manager Update apps/v1 2021-11-13 01:00:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6502ad89-487b-49ab-960b-5b40a0587fb7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000ca1168 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 13 01:00:19.439: INFO: All old ReplicaSets of Deployment "webserver-deployment": Nov 13 01:00:19.439: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-6227 43d61e07-2545-4943-853a-36d182a60efc 70470 3 2021-11-13 01:00:01 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 6502ad89-487b-49ab-960b-5b40a0587fb7 0xc000ca11e7 0xc000ca11e8}] [] [{kube-controller-manager Update apps/v1 2021-11-13 01:00:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6502ad89-487b-49ab-960b-5b40a0587fb7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000ca1328 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Nov 13 01:00:19.445: INFO: Pod "webserver-deployment-795d758f88-4s94g" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-4s94g webserver-deployment-795d758f88- deployment-6227 ac6e421c-d3cf-483e-895c-2924475ddc02 70356 0 2021-11-13 01:00:17 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 16fcb8fc-49dd-4cbc-8c74-211c45878136 0xc00101656f 0xc001016580}] [] [{kube-controller-manager Update v1 2021-11-13 01:00:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"16fcb8fc-49dd-4cbc-8c74-211c45878136\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ql2r7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ql2r7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 13 01:00:19.445: INFO: Pod "webserver-deployment-795d758f88-9p9f5" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-9p9f5 webserver-deployment-795d758f88- deployment-6227 2ce44d5c-9b11-4c7c-af73-2aa6d227d3c6 70479 0 2021-11-13 01:00:19 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 16fcb8fc-49dd-4cbc-8c74-211c45878136 0xc00101670f 0xc001016720}] [] [{kube-controller-manager Update v1 2021-11-13 01:00:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"16fcb8fc-49dd-4cbc-8c74-211c45878136\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xksnn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequireme
nts{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xksnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 13 01:00:19.445: INFO: Pod "webserver-deployment-795d758f88-bxrls" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-bxrls webserver-deployment-795d758f88- deployment-6227 82e573c7-4e4e-4575-b734-2904de9acbdf 70375 0 2021-11-13 01:00:17 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 16fcb8fc-49dd-4cbc-8c74-211c45878136 0xc00101688f 0xc0010168a0}] [] [{kube-controller-manager Update v1 2021-11-13 01:00:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"16fcb8fc-49dd-4cbc-8c74-211c45878136\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wxv79,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wxv79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 13 01:00:19.446: INFO: Pod "webserver-deployment-795d758f88-c2nhs" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-c2nhs webserver-deployment-795d758f88- deployment-6227 7223a5a0-76c7-43db-8431-4d59c06bf96b 70380 0 2021-11-13 01:00:17 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 16fcb8fc-49dd-4cbc-8c74-211c45878136 0xc001016a0f 0xc001016a20}] [] [{kube-controller-manager Update v1 2021-11-13 01:00:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"16fcb8fc-49dd-4cbc-8c74-211c45878136\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-k4777,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequireme
nts{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k4777,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 13 01:00:19.446: INFO: Pod "webserver-deployment-795d758f88-gvrxb" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-gvrxb webserver-deployment-795d758f88- deployment-6227 ca1d4050-ba7e-48b3-a692-a35c15a0bf00 70360 0 2021-11-13 01:00:17 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 16fcb8fc-49dd-4cbc-8c74-211c45878136 0xc001016b9f 0xc001016bb0}] [] [{kube-controller-manager Update v1 2021-11-13 01:00:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"16fcb8fc-49dd-4cbc-8c74-211c45878136\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9pz7w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9pz7w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 13 01:00:19.446: INFO: Pod "webserver-deployment-795d758f88-wc6rg" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-wc6rg webserver-deployment-795d758f88- deployment-6227 38dfa200-75af-4630-9e11-d0573aad3e29 70345 0 2021-11-13 01:00:17 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 16fcb8fc-49dd-4cbc-8c74-211c45878136 0xc001016d1f 0xc001016d30}] [] [{kube-controller-manager Update v1 2021-11-13 01:00:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"16fcb8fc-49dd-4cbc-8c74-211c45878136\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p8m79,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequireme
nts{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p8m79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 13 01:00:19.447: INFO: Pod "webserver-deployment-847dcfb7fb-2n4sx" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2n4sx webserver-deployment-847dcfb7fb- deployment-6227 659d5ff5-267c-48b0-a66e-8062f34daef6 70482 0 2021-11-13 01:00:19 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 43d61e07-2545-4943-853a-36d182a60efc 0xc001016e9f 0xc001016eb0}] [] [{kube-controller-manager Update v1 2021-11-13 01:00:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"43d61e07-2545-4943-853a-36d182a60efc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8nmgc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8nmgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exist
s,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 13 01:00:19.447: INFO: Pod "webserver-deployment-847dcfb7fb-54xvz" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-54xvz webserver-deployment-847dcfb7fb- deployment-6227 e7ba8713-f640-43ba-ad09-7519a7c48018 70477 0 2021-11-13 01:00:19 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 43d61e07-2545-4943-853a-36d182a60efc 0xc001016fdf 0xc001016ff0}] [] [{kube-controller-manager Update v1 2021-11-13 01:00:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"43d61e07-2545-4943-853a-36d182a60efc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qhv6z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api
-access-qhv6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 13 01:00:19.448: INFO: Pod "webserver-deployment-847dcfb7fb-6fglb" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-6fglb webserver-deployment-847dcfb7fb- deployment-6227 ae7de90e-fa74-4230-95c0-b87a59af39a9 70221 0 2021-11-13 01:00:01 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.74" ], "mac": "46:68:f4:0e:1d:cc", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.74" ], "mac": "46:68:f4:0e:1d:cc", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 43d61e07-2545-4943-853a-36d182a60efc 0xc00101720f 0xc001017240}] [] [{kube-controller-manager Update v1 2021-11-13 01:00:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"43d61e07-2545-4943-853a-36d182a60efc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-13 01:00:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-13 01:00:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.74\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ckq4q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ckq4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volum
eDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.74,StartTime:2021-11-13 01:00:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-13 01:00:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://13b2066f5d1c24a00f72ab097ad36de06bf3eb9e2fb749a5f9dc96a9fea22535,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.74,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 13 01:00:19.448: INFO: Pod "webserver-deployment-847dcfb7fb-6fvht" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-6fvht webserver-deployment-847dcfb7fb- deployment-6227 581138e6-773a-42bd-8eb8-50178dd74171 70298 0 2021-11-13 01:00:01 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.72" ], "mac": "de:a8:91:12:df:5e", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.72" ], "mac": "de:a8:91:12:df:5e", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 
43d61e07-2545-4943-853a-36d182a60efc 0xc00101770f 0xc001017730}] [] [{kube-controller-manager Update v1 2021-11-13 01:00:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"43d61e07-2545-4943-853a-36d182a60efc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-13 01:00:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-13 01:00:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.72\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pkdlj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pkdlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptio
ns:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.72,StartTime:2021-11-13 01:00:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-13 01:00:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://293518dcc49c812f1f5eb0f37fd07270513450ecd0c532e0a190a29ab6d05204,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.72,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 13 01:00:19.449: INFO: Pod "webserver-deployment-847dcfb7fb-dj86p" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-dj86p webserver-deployment-847dcfb7fb- deployment-6227 c704fe43-69ec-45e7-a668-455f07be2210 70301 0 2021-11-13 01:00:01 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.75" ], "mac": "66:5d:b0:e4:f4:71", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.75" ], "mac": 
"66:5d:b0:e4:f4:71", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 43d61e07-2545-4943-853a-36d182a60efc 0xc001017cdf 0xc001017cf0}] [] [{kube-controller-manager Update v1 2021-11-13 01:00:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"43d61e07-2545-4943-853a-36d182a60efc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-13 01:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-13 01:00:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.75\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fr75t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fr75t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:
nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.75,StartTime:2021-11-13 01:00:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-13 01:00:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://6075d5137ec99c75bbde1e47a3f9ede695e5d756e1c0ad9f9bf109ae718d6fc6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.75,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 13 01:00:19.449: INFO: Pod "webserver-deployment-847dcfb7fb-fpf28" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-fpf28 webserver-deployment-847dcfb7fb- deployment-6227 9a858195-7c3d-4582-9728-f04074b00ae0 70483 0 2021-11-13 01:00:19 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 43d61e07-2545-4943-853a-36d182a60efc 0xc001017edf 0xc001017ef0}] [] 
[{kube-controller-manager Update v1 2021-11-13 01:00:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"43d61e07-2545-4943-853a-36d182a60efc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ldxsj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ldxsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerati
ons:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 13 01:00:19.449: INFO: Pod "webserver-deployment-847dcfb7fb-jqzfk" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-jqzfk webserver-deployment-847dcfb7fb- deployment-6227 c80e23b0-878d-46c9-9e17-7a347cf3192d 70185 0 2021-11-13 01:00:01 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.115" ], "mac": "42:eb:f3:f9:84:6b", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.115" ], "mac": "42:eb:f3:f9:84:6b", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 43d61e07-2545-4943-853a-36d182a60efc 0xc00005a17f 0xc00005aee0}] [] [{kube-controller-manager Update v1 2021-11-13 01:00:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"43d61e07-2545-4943-853a-36d182a60efc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-13 01:00:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-13 01:00:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.115\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bhm9m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bhm9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.115,StartTime:2021-11-13 01:00:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-13 01:00:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://5cd3aaeab7eb82c7166a2580ccd30b530a68e9b99eafc2f0422e656966f24aa7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.115,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 13 01:00:19.450: INFO: Pod "webserver-deployment-847dcfb7fb-k8mvk" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-k8mvk webserver-deployment-847dcfb7fb- deployment-6227 02a577a3-41eb-431b-b98f-8e144f0128dc 70249 0 2021-11-13 01:00:01 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.73" ], "mac": "0a:60:36:ae:10:f8", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.73" ], "mac": "0a:60:36:ae:10:f8", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 43d61e07-2545-4943-853a-36d182a60efc 0xc000a9e6af 0xc000a9e740}] [] [{kube-controller-manager Update v1 2021-11-13 01:00:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"43d61e07-2545-4943-853a-36d182a60efc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-13 01:00:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-13 01:00:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.73\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rklhs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rklhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleratio
n{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.73,StartTime:2021-11-13 01:00:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-13 01:00:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://71437bd8acc0e3e2f5ff407321af3506d9fa0e6d37ff096cdbc9ad6a4f7aa468,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.73,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 13 01:00:19.450: INFO: Pod "webserver-deployment-847dcfb7fb-s8c8t" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-s8c8t webserver-deployment-847dcfb7fb- deployment-6227 6ac72830-f9b9-4601-90ce-da31283b4213 70206 0 2021-11-13 01:00:01 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.114" ], "mac": "12:2b:9d:1b:de:16", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.114" ], "mac": "12:2b:9d:1b:de:16", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 43d61e07-2545-4943-853a-36d182a60efc 0xc000a9ef6f 0xc000a9ef90}] [] [{kube-controller-manager Update v1 2021-11-13 01:00:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"43d61e07-2545-4943-853a-36d182a60efc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-13 01:00:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-13 01:00:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.114\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zqxwk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zqxwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volu
meDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.114,StartTime:2021-11-13 01:00:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-13 01:00:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://765f1180bf81ed5868c2875bff67ea23f8179bf7f6b47f2fb9ff9b3e881d7978,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.114,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 13 01:00:19.451: INFO: Pod "webserver-deployment-847dcfb7fb-tb7pb" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-tb7pb webserver-deployment-847dcfb7fb- deployment-6227 9ee1c0aa-6063-4855-a6d5-ca2963d1e033 70218 0 2021-11-13 01:00:01 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.117" ], "mac": "72:67:39:98:0e:18", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.117" ], "mac": "72:67:39:98:0e:18", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 
43d61e07-2545-4943-853a-36d182a60efc 0xc000a9f5df 0xc000a9f5f0}] [] [{kube-controller-manager Update v1 2021-11-13 01:00:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"43d61e07-2545-4943-853a-36d182a60efc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-13 01:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-13 01:00:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.117\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qpzlv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qpzlv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpti
ons:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.117,StartTime:2021-11-13 01:00:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-13 01:00:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://6508c3800730b07680a825a42b0509cb367b1acd86ef1646c9e90d4339722d48,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.117,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 13 01:00:19.451: INFO: Pod "webserver-deployment-847dcfb7fb-z2c9g" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-z2c9g webserver-deployment-847dcfb7fb- deployment-6227 a484ecfb-a960-4844-8bbf-5c85780a48b5 70192 0 2021-11-13 01:00:01 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.112" ], "mac": "fe:31:21:4a:16:08", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.112" ], 
"mac": "fe:31:21:4a:16:08", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 43d61e07-2545-4943-853a-36d182a60efc 0xc000a9f8df 0xc000a9f920}] [] [{kube-controller-manager Update v1 2021-11-13 01:00:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"43d61e07-2545-4943-853a-36d182a60efc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-13 01:00:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-13 01:00:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.112\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rqkkq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rqkkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.112,StartTime:2021-11-13 01:00:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-13 01:00:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://6bac591bf51e2862d629f6ba1851c2ca2e5499ec963725846be61ae27ab1d563,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.112,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:19.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6227" for this suite. 
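------------------------------
The pod inventory logged above comes from the proportional-scaling test summarized just below: the suite scales the webserver Deployment while two ReplicaSets coexist and checks that new replicas are split proportionally between them. For reference, the scaling step can be reproduced against the Deployment's scale subresource with client-go. The sketch below is illustrative rather than the suite's own code; the kubeconfig path, namespace "deployment-6227", and Deployment name "webserver-deployment" are taken from this run, and the target replica count of 30 is an assumption.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	deployments := cs.AppsV1().Deployments("deployment-6227")

	// Read the current scale, then raise it; during a rollout the Deployment
	// controller distributes the added replicas proportionally across the
	// existing ReplicaSets instead of sending them all to the newest one.
	scale, err := deployments.GetScale(ctx, "webserver-deployment", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 30 // assumed target; the conformance test picks its own
	if _, err := deployments.UpdateScale(ctx, "webserver-deployment", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("scaled webserver-deployment, waiting for proportional rollout")
}
------------------------------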
• [SLOW TEST:18.122 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":5,"skipped":73,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:19.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching Nov 13 01:00:19.510: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating Nov 13 01:00:19.535: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:19.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-7908" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":6,"skipped":75,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:13.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 13 01:00:14.020: INFO: Waiting up to 5m0s for pod "downwardapi-volume-39f5ec0a-78ed-48ac-b914-d8b9b29cf644" in namespace "downward-api-4695" to be "Succeeded or Failed" Nov 13 01:00:14.022: INFO: Pod "downwardapi-volume-39f5ec0a-78ed-48ac-b914-d8b9b29cf644": Phase="Pending", Reason="", readiness=false. Elapsed: 2.406634ms Nov 13 01:00:16.027: INFO: Pod "downwardapi-volume-39f5ec0a-78ed-48ac-b914-d8b9b29cf644": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007464529s Nov 13 01:00:18.032: INFO: Pod "downwardapi-volume-39f5ec0a-78ed-48ac-b914-d8b9b29cf644": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011718021s Nov 13 01:00:20.034: INFO: Pod "downwardapi-volume-39f5ec0a-78ed-48ac-b914-d8b9b29cf644": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.01416197s Nov 13 01:00:22.037: INFO: Pod "downwardapi-volume-39f5ec0a-78ed-48ac-b914-d8b9b29cf644": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017397461s Nov 13 01:00:24.041: INFO: Pod "downwardapi-volume-39f5ec0a-78ed-48ac-b914-d8b9b29cf644": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.021381828s STEP: Saw pod success Nov 13 01:00:24.041: INFO: Pod "downwardapi-volume-39f5ec0a-78ed-48ac-b914-d8b9b29cf644" satisfied condition "Succeeded or Failed" Nov 13 01:00:24.045: INFO: Trying to get logs from node node2 pod downwardapi-volume-39f5ec0a-78ed-48ac-b914-d8b9b29cf644 container client-container: STEP: delete the pod Nov 13 01:00:24.112: INFO: Waiting for pod downwardapi-volume-39f5ec0a-78ed-48ac-b914-d8b9b29cf644 to disappear Nov 13 01:00:24.114: INFO: Pod downwardapi-volume-39f5ec0a-78ed-48ac-b914-d8b9b29cf644 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:24.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4695" for this suite. • [SLOW TEST:10.137 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":372,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:59:47.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:27.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1872" for this suite. 
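------------------------------
The Job test summarized below exercises local restarts: a container fails on its first attempt and, because the pod template uses restartPolicy: OnFailure, the kubelet restarts the container in place on the same node until the Job reaches its completion count. A minimal client-go sketch of such a Job follows; the image, command, counts, and names are illustrative assumptions rather than the suite's exact spec. Recording the first attempt on an emptyDir volume works because an emptyDir survives container restarts within a pod.

package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	completions, parallelism := int32(4), int32(2)
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "sometimes-fails"}, // hypothetical name
		Spec: batchv1.JobSpec{
			Completions: &completions,
			Parallelism: &parallelism,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure makes the kubelet restart the container on the
					// same node; the Job controller does not replace the pod.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Volumes: []corev1.Volume{{
						Name:         "data",
						VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
					}},
					Containers: []corev1.Container{{
						Name:  "worker",
						Image: "busybox:1.34",
						// Fail once, then succeed after the local restart; the
						// emptyDir marker persists across container restarts.
						Command: []string{"sh", "-c",
							"if [ ! -f /data/tried ]; then touch /data/tried; exit 1; fi; exit 0"},
						VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
					}},
				},
			},
		},
	}
	if _, err := cs.BatchV1().Jobs("job-1872").Create(context.Background(), job, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------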
• [SLOW TEST:40.042 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":12,"skipped":252,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:27.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Nov 13 01:00:27.872: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Nov 13 01:00:27.875: INFO: starting watch STEP: patching STEP: updating Nov 13 01:00:27.885: INFO: waiting for watch events with expected annotations Nov 13 01:00:27.885: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:27.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-4658" for this suite.
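The Ingress API conformance test above drives the full verb set against networking.k8s.io/v1 (discovery, create, get, list, watch, patch, update, the /status subresource, delete, deletecollection), mirroring the RuntimeClass test earlier. A rough kubectl equivalent of the core verbs, using a hypothetical namespace and object name:

  kubectl get --raw /apis                          # discovery: list API groups
  kubectl get --raw /apis/networking.k8s.io/v1     # discovery: group/version resources
  kubectl -n demo apply -f - <<'EOF'
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: demo-ingress                 # hypothetical
  spec:
    defaultBackend:
      service:
        name: demo-svc                 # hypothetical backend Service
        port:
          number: 80
  EOF
  kubectl -n demo get ingress demo-ingress                     # get
  kubectl get ingress --all-namespaces                         # cluster-wide list
  kubectl -n demo patch ingress demo-ingress --type=merge \
    -p '{"metadata":{"annotations":{"patched":"true"}}}'       # patch
  kubectl -n demo delete ingress demo-ingress                  # delete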
• ------------------------------ {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":13,"skipped":260,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:58:13.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2467 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-2467 I1113 00:58:13.601696 36 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2467, replica count: 2 I1113 00:58:16.652933 36 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 00:58:19.653816 36 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 13 00:58:19.653: INFO: Creating new exec pod Nov 13 00:58:24.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Nov 13 00:58:24.926: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Nov 13 00:58:24.926: INFO: stdout: "externalname-service-szcjp" Nov 13 00:58:24.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.6.3 80' Nov 13 00:58:25.160: INFO: stderr: "+ nc -v -t -w 2 10.233.6.3 80\n+ echo hostName\nConnection to 10.233.6.3 80 port [tcp/http] succeeded!\n" Nov 13 00:58:25.160: INFO: stdout: "externalname-service-szcjp" Nov 13 00:58:25.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 00:58:25.388: INFO: rc: 1 Nov 13 00:58:25.388: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
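The failing check here is the NodePort leg of the test: after the service type flips from ExternalName to NodePort, the exec pod probes the node address on the allocated port until it answers. Sketched with plain kubectl (the namespace, pod, and service names come from the log; the node IP 10.10.190.207 and port 31745 are this run's values), while the harness keeps retrying below:

  NODE_PORT=$(kubectl -n services-2467 get svc externalname-service \
    -o jsonpath='{.spec.ports[0].nodePort}')       # 31745 in this run
  kubectl -n services-2467 exec execpodwxlzs -- \
    /bin/sh -x -c "echo hostName | nc -v -t -w 2 10.10.190.207 ${NODE_PORT}"
  # On success nc prints 'Connection to ... succeeded!' and stdout carries a
  # backend pod name, as in the ClusterIP checks above; here every attempt
  # is refused, so the test retries.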
Nov 13 00:58:26.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745'
Nov 13 00:58:26.636: INFO: rc: 1
Nov 13 00:58:26.636: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying...
[identical probe attempts elided: the same command was retried roughly once per second from 00:58:27 through 00:59:50, every attempt returning rc: 1 with "nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused"]
Nov 13 00:59:51.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745'
Nov 13 00:59:52.018: INFO: rc: 1
Nov 13 00:59:52.018: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying...
Nov 13 00:59:52.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 00:59:53.492: INFO: rc: 1 Nov 13 00:59:53.493: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:54.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 00:59:54.848: INFO: rc: 1 Nov 13 00:59:54.848: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:55.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 00:59:56.800: INFO: rc: 1 Nov 13 00:59:56.800: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:57.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 00:59:58.715: INFO: rc: 1 Nov 13 00:59:58.715: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 00:59:59.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:00.276: INFO: rc: 1 Nov 13 01:00:00.276: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 01:00:00.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:01.661: INFO: rc: 1 Nov 13 01:00:01.661: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:00:02.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:03.132: INFO: rc: 1 Nov 13 01:00:03.132: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:00:03.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:03.991: INFO: rc: 1 Nov 13 01:00:03.991: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:00:04.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:04.688: INFO: rc: 1 Nov 13 01:00:04.688: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:00:05.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:05.862: INFO: rc: 1 Nov 13 01:00:05.862: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 01:00:06.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:06.941: INFO: rc: 1 Nov 13 01:00:06.941: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:00:07.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:07.740: INFO: rc: 1 Nov 13 01:00:07.740: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:00:08.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:08.898: INFO: rc: 1 Nov 13 01:00:08.898: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:00:09.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:09.720: INFO: rc: 1 Nov 13 01:00:09.720: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:00:10.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:10.803: INFO: rc: 1 Nov 13 01:00:10.803: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 01:00:11.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:11.642: INFO: rc: 1 Nov 13 01:00:11.643: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31745 nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:00:12.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:12.644: INFO: rc: 1 Nov 13 01:00:12.644: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:00:13.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:13.657: INFO: rc: 1 Nov 13 01:00:13.657: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:00:14.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:14.791: INFO: rc: 1 Nov 13 01:00:14.791: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:00:15.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:15.747: INFO: rc: 1 Nov 13 01:00:15.747: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 01:00:16.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:16.650: INFO: rc: 1 Nov 13 01:00:16.650: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:00:17.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:17.812: INFO: rc: 1 Nov 13 01:00:17.812: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:00:18.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:18.788: INFO: rc: 1 Nov 13 01:00:18.788: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31745 nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:00:19.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:20.144: INFO: rc: 1 Nov 13 01:00:20.144: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:00:20.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:21.098: INFO: rc: 1 Nov 13 01:00:21.098: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 01:00:21.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:21.824: INFO: rc: 1 Nov 13 01:00:21.824: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:00:22.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:23.227: INFO: rc: 1 Nov 13 01:00:23.227: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:00:23.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:23.705: INFO: rc: 1 Nov 13 01:00:23.705: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:00:24.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:24.708: INFO: rc: 1 Nov 13 01:00:24.708: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31745 nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:00:25.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745' Nov 13 01:00:25.868: INFO: rc: 1 Nov 13 01:00:25.868: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31745 + echo hostName nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
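What the loop above shows: the test polls the NodePort endpoint (10.10.190.207:31745) about once per second from an exec helper pod and keeps retrying until a 2m0s deadline expires. A minimal Go sketch of that poll-until-timeout pattern follows; it is an illustration, not the e2e framework's actual helper, and it assumes kubectl is on PATH while reusing the namespace, pod, and endpoint names from the log. The final attempt and the resulting FAIL appear below it.

// Sketch only: poll a NodePort via `kubectl exec` + `nc` until it answers
// or a 2-minute deadline passes, mirroring the "Retrying..." loop above.
package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func checkNodePortReachable(ns, execPod, host string, port int) error {
	shellCmd := fmt.Sprintf("echo hostName | nc -v -t -w 2 %s %d", host, port)
	err := wait.PollImmediate(1*time.Second, 2*time.Minute, func() (bool, error) {
		out, err := exec.Command("kubectl", "--namespace", ns, "exec", execPod,
			"--", "/bin/sh", "-x", "-c", shellCmd).CombinedOutput()
		if err != nil {
			// Report the failure and keep polling, as in the log.
			fmt.Printf("Service reachability failing with error: %v\n%s\nRetrying...\n", err, out)
			return false, nil
		}
		return true, nil
	})
	if err != nil {
		// Shape of the timeout error seen in the FAIL below.
		return fmt.Errorf("service is not reachable within 2m0s timeout on endpoint %s:%d over TCP protocol", host, port)
	}
	return nil
}

func main() {
	if err := checkNodePortReachable("services-2467", "execpodwxlzs", "10.10.190.207", 31745); err != nil {
		fmt.Println(err)
	}
}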
Nov 13 01:00:25.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745'
Nov 13 01:00:26.194: INFO: rc: 1
Nov 13 01:00:26.194: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2467 exec execpodwxlzs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31745:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31745
+ echo hostName
nc: connect to 10.10.190.207 port 31745 (tcp) failed: Connection refused
command terminated with exit code 1

error: exit status 1
Retrying...
Nov 13 01:00:26.195: FAIL: Unexpected error:
    <*errors.errorString | 0xc004dd8be0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31745 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31745 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.15()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 +0x358
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000183680)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000183680)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000183680, 0x70e7b58)
    /usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1238 +0x2b3
Nov 13 01:00:26.196: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-2467".
STEP: Found 17 events.
Nov 13 01:00:26.222: INFO: At 2021-11-13 00:58:13 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-6nfb8
Nov 13 01:00:26.222: INFO: At 2021-11-13 00:58:13 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-szcjp
Nov 13 01:00:26.222: INFO: At 2021-11-13 00:58:13 +0000 UTC - event for externalname-service-6nfb8: {default-scheduler } Scheduled: Successfully assigned services-2467/externalname-service-6nfb8 to node1
Nov 13 01:00:26.222: INFO: At 2021-11-13 00:58:13 +0000 UTC - event for externalname-service-szcjp: {default-scheduler } Scheduled: Successfully assigned services-2467/externalname-service-szcjp to node1
Nov 13 01:00:26.222: INFO: At 2021-11-13 00:58:16 +0000 UTC - event for externalname-service-6nfb8: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 01:00:26.222: INFO: At 2021-11-13 00:58:16 +0000 UTC - event for externalname-service-6nfb8: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 279.18618ms
Nov 13 01:00:26.222: INFO: At 2021-11-13 00:58:16 +0000 UTC - event for externalname-service-szcjp: {kubelet node1} Created: Created container externalname-service
Nov 13 01:00:26.222: INFO: At 2021-11-13 00:58:16 +0000 UTC - event for externalname-service-szcjp: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 01:00:26.222: INFO: At 2021-11-13 00:58:16 +0000 UTC - event for externalname-service-szcjp: {kubelet node1} Started: Started container externalname-service
Nov 13 01:00:26.222: INFO: At 2021-11-13 00:58:16 +0000 UTC - event for externalname-service-szcjp: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 288.587861ms
Nov 13 01:00:26.222: INFO: At 2021-11-13 00:58:17 +0000 UTC - event for externalname-service-6nfb8: {kubelet node1} Started: Started container externalname-service
Nov 13 01:00:26.222: INFO: At 2021-11-13 00:58:17 +0000 UTC - event for externalname-service-6nfb8: {kubelet node1} Created: Created container externalname-service
Nov 13 01:00:26.222: INFO: At 2021-11-13 00:58:19 +0000 UTC - event for execpodwxlzs: {default-scheduler } Scheduled: Successfully assigned services-2467/execpodwxlzs to node2
Nov 13 01:00:26.222: INFO: At 2021-11-13 00:58:21 +0000 UTC - event for execpodwxlzs: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 277.57969ms
Nov 13 01:00:26.222: INFO: At 2021-11-13 00:58:21 +0000 UTC - event for execpodwxlzs: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 01:00:26.222: INFO: At 2021-11-13 00:58:22 +0000 UTC - event for execpodwxlzs: {kubelet node2} Started: Started container agnhost-container
Nov 13 01:00:26.222: INFO: At 2021-11-13 00:58:22 +0000 UTC - event for execpodwxlzs: {kubelet node2} Created: Created container agnhost-container
Nov 13 01:00:26.224: INFO: POD                        NODE   PHASE    GRACE  CONDITIONS
Nov 13 01:00:26.224: INFO: execpodwxlzs               node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:58:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:58:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:58:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:58:19 +0000 UTC }]
Nov 13 01:00:26.224: INFO: externalname-service-6nfb8 node1  Running         [{Initialized True
0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:58:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:58:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:58:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:58:13 +0000 UTC }] Nov 13 01:00:26.225: INFO: externalname-service-szcjp node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:58:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:58:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:58:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 00:58:13 +0000 UTC }] Nov 13 01:00:26.225: INFO: Nov 13 01:00:26.229: INFO: Logging node info for node master1 Nov 13 01:00:26.231: INFO: Node Info: &Node{ObjectMeta:{master1 56d66c54-e52b-494a-a758-e4b658c4b245 70464 0 2021-11-12 21:05:50 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:05:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 
DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:25 +0000 UTC,LastTransitionTime:2021-11-12 21:11:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:19 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:19 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:19 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 01:00:19 +0000 UTC,LastTransitionTime:2021-11-12 21:11:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94e600d00e79450a9fb632d8473a11eb,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:6e4bb815-8b93-47c2-9321-93e7ada261f6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 
k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:57d1a39684ee5a5b5d34638cff843561d440d0f996303b2e841cabf228a4c2af nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 01:00:26.232: INFO: Logging kubelet events for node master1 Nov 13 01:00:26.233: INFO: Logging pods the kubelet thinks is on node master1 Nov 13 01:00:26.242: INFO: kube-apiserver-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:26.242: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 01:00:26.242: INFO: kube-proxy-6m7qt started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:26.242: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 01:00:26.242: INFO: container-registry-65d7c44b96-qwqcz started at 2021-11-12 21:12:56 +0000 UTC (0+2 container statuses recorded) Nov 13 01:00:26.242: INFO: Container docker-registry ready: true, restart count 0 Nov 13 01:00:26.242: INFO: Container nginx ready: true, restart count 0 Nov 13 01:00:26.242: INFO: kube-controller-manager-master1 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded) Nov 
13 01:00:26.242: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 13 01:00:26.242: INFO: kube-flannel-79bvx started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 01:00:26.242: INFO: Init container install-cni ready: true, restart count 0 Nov 13 01:00:26.242: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 01:00:26.242: INFO: kube-multus-ds-amd64-qtmwl started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:26.242: INFO: Container kube-multus ready: true, restart count 1 Nov 13 01:00:26.242: INFO: coredns-8474476ff8-9vc8b started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:26.242: INFO: Container coredns ready: true, restart count 2 Nov 13 01:00:26.242: INFO: node-exporter-zm5hq started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 01:00:26.242: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:00:26.242: INFO: Container node-exporter ready: true, restart count 0 Nov 13 01:00:26.242: INFO: kube-scheduler-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:26.242: INFO: Container kube-scheduler ready: true, restart count 0 W1113 01:00:26.263440 36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 01:00:26.332: INFO: Latency metrics for node master1 Nov 13 01:00:26.332: INFO: Logging node info for node master2 Nov 13 01:00:26.336: INFO: Node Info: &Node{ObjectMeta:{master2 9cc6c106-2749-4b3a-bbe2-d8a672ab49e0 70307 0 2021-11-12 21:06:20 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:06:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-11-12 21:16:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-12 21:16:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:30 +0000 UTC,LastTransitionTime:2021-11-12 21:11:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:16 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:16 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:16 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 01:00:16 +0000 UTC,LastTransitionTime:2021-11-12 21:08:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:65d51a0e6dc44ad1ac5d3b5cd37365f1,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:728abaee-0c5e-4ddb-a22e-72a1345c5ab6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 01:00:26.336: INFO: Logging kubelet events for node master2 Nov 13 01:00:26.339: INFO: Logging pods the kubelet thinks is on node master2 Nov 13 01:00:26.348: INFO: node-feature-discovery-controller-cff799f9f-c54h8 started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:26.348: INFO: Container nfd-controller ready: true, restart count 0 Nov 13 01:00:26.348: INFO: 
kube-apiserver-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:26.348: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 01:00:26.348: INFO: kube-scheduler-master2 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:26.348: INFO: Container kube-scheduler ready: true, restart count 2 Nov 13 01:00:26.348: INFO: kube-proxy-5xbt9 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:26.348: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 01:00:26.348: INFO: kube-flannel-x76f4 started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 01:00:26.348: INFO: Init container install-cni ready: true, restart count 0 Nov 13 01:00:26.348: INFO: Container kube-flannel ready: true, restart count 1 Nov 13 01:00:26.348: INFO: kube-multus-ds-amd64-8zzgk started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:26.348: INFO: Container kube-multus ready: true, restart count 1 Nov 13 01:00:26.348: INFO: coredns-8474476ff8-s7twh started at 2021-11-12 21:09:11 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:26.348: INFO: Container coredns ready: true, restart count 1 Nov 13 01:00:26.348: INFO: node-exporter-clpwc started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 01:00:26.348: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:00:26.348: INFO: Container node-exporter ready: true, restart count 0 Nov 13 01:00:26.348: INFO: kube-controller-manager-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 01:00:26.348: INFO: Container kube-controller-manager ready: true, restart count 2 W1113 01:00:26.362148 36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Nov 13 01:00:26.429: INFO: Latency metrics for node master2 Nov 13 01:00:26.429: INFO: Logging node info for node master3 Nov 13 01:00:26.433: INFO: Node Info: &Node{ObjectMeta:{master3 fce0cd54-e4d8-4ce1-b720-522aad2d7989 70605 0 2021-11-12 21:06:31 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 
UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:20 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:20 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:20 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 01:00:20 +0000 UTC,LastTransitionTime:2021-11-12 21:11:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:592c271b4697499588d9c2b3767b866a,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a95de4ca-c566-4b34-8463-623af932d416,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 01:00:26.434: INFO: Logging kubelet events for node master3
Nov 13 01:00:26.436: INFO: Logging pods the kubelet thinks is on node master3
Nov 13 01:00:26.444: INFO: kube-scheduler-master3 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.444: INFO: Container kube-scheduler ready: true, restart count 2
Nov 13 01:00:26.444: INFO: node-exporter-l4x25 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 01:00:26.444: INFO: Container kube-rbac-proxy ready: true, restart count 0
Nov 13 01:00:26.444: INFO: Container node-exporter ready: true, restart count 0
Nov 13 01:00:26.444: INFO: kube-apiserver-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.444: INFO: Container kube-apiserver ready: true, restart count 0
Nov 13 01:00:26.444: INFO: kube-controller-manager-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.444: INFO: Container kube-controller-manager ready: true, restart count 3
Nov 13 01:00:26.444: INFO: kube-multus-ds-amd64-vp8p7 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.444: INFO: Container kube-multus ready: true, restart count 1
Nov 13 01:00:26.444: INFO: dns-autoscaler-7df78bfcfb-d88qs started at 2021-11-12 21:09:13 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.444: INFO: Container autoscaler ready: true, restart count 1
Nov 13 01:00:26.444: INFO: kube-proxy-tssd5 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.444: INFO: Container kube-proxy ready: true, restart count 1
Nov 13 01:00:26.444: INFO: kube-flannel-vxlrs started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 01:00:26.444: INFO: Init container install-cni ready: true, restart count 0
Nov 13 01:00:26.444: INFO: Container kube-flannel ready: true, restart count 1
W1113 01:00:26.456730 36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 01:00:26.519: INFO: Latency metrics for node master3
Nov 13 01:00:26.519: INFO: Logging node info for node node1
Nov 13 01:00:26.522: INFO: Node Info: &Node{ObjectMeta:{node1 6ceb907c-9809-4d18-88c6-b1e10ba80f97 70623 0 2021-11-12 21:07:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-12 21:20:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:27 +0000 UTC,LastTransitionTime:2021-11-12 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:23 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:23 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:23 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 01:00:23 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7cf6287777fe4e3b9a80df40dea25b6d,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:2125bc5f-9167-464a-b6d0-8e8a192327d3,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:1841df8d4cc71e4f69cc1603012b99570f40d18cd36ee1065933b46f984cf0cd alpine:3.12],SizeBytes:5592390,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 01:00:26.523: INFO: Logging kubelet events for node node1
Nov 13 01:00:26.525: INFO: Logging pods the kubelet thinks is on node node1
Nov 13 01:00:26.551: INFO: simpletest.rc-bhfm2 started at 2021-11-13 00:59:51 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container nginx ready: true, restart count 0
Nov 13 01:00:26.551: INFO: kube-flannel-r7bbp started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 01:00:26.551: INFO: Init container install-cni ready: true, restart count 2
Nov 13 01:00:26.551: INFO: Container kube-flannel ready: true, restart count 3
Nov 13 01:00:26.551: INFO: kube-multus-ds-amd64-4wqsv started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container kube-multus ready: true, restart count 1
Nov 13 01:00:26.551: INFO: cmk-4tcdw started at 2021-11-12 21:21:00 +0000 UTC (0+2 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container nodereport ready: true, restart count 0
Nov 13 01:00:26.551: INFO: Container reconcile ready: true, restart count 0
Nov 13 01:00:26.551: INFO: node-exporter-hqkfs started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container kube-rbac-proxy ready: true, restart count 0
Nov 13 01:00:26.551: INFO: Container node-exporter ready: true, restart count 0
Nov 13 01:00:26.551: INFO: nginx-proxy-node1 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container nginx-proxy ready: true, restart count 2
Nov 13 01:00:26.551: INFO: test-rolling-update-controller-27ttc started at 2021-11-13 01:00:10 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container httpd ready: true, restart count 0
Nov 13 01:00:26.551: INFO: prometheus-operator-585ccfb458-qcz7s started at 2021-11-12 21:21:55 +0000 UTC (0+2 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container kube-rbac-proxy ready: true, restart count 0
Nov 13 01:00:26.551: INFO: Container prometheus-operator ready: true, restart count 0
Nov 13 01:00:26.551: INFO: simpletest.rc-xz246 started at 2021-11-13 00:59:51 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container nginx ready: true, restart count 0
Nov 13 01:00:26.551: INFO: test-rollover-deployment-98c5f4599-sx84m started at (0+0 container statuses recorded)
Nov 13 01:00:26.551: INFO: collectd-74xkn started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container collectd ready: true, restart count 0
Nov 13 01:00:26.551: INFO: Container collectd-exporter ready: true, restart count 0
Nov 13 01:00:26.551: INFO: Container rbac-proxy ready: true, restart count 0
Nov 13 01:00:26.551: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container kube-sriovdp ready: true, restart count 0
Nov 13 01:00:26.551: INFO: simpletest.rc-6zj7b started at 2021-11-13 00:59:51 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container nginx ready: true, restart count 0
Nov 13 01:00:26.551: INFO: externalname-service-6nfb8 started at 2021-11-13 00:58:14 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container externalname-service ready: true, restart count 0
Nov 13 01:00:26.551: INFO: kube-proxy-p6kbl started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container kube-proxy ready: true, restart count 2
Nov 13 01:00:26.551: INFO: simpletest.rc-wx56l started at 2021-11-13 00:59:51 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container nginx ready: true, restart count 0
Nov 13 01:00:26.551: INFO: simpletest.rc-vfjb4 started at 2021-11-13 00:59:51 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container nginx ready: true, restart count 0
Nov 13 01:00:26.551: INFO: concurrent-27279420-g8d8r started at 2021-11-13 01:00:00 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container c ready: true, restart count 0
Nov 13 01:00:26.551: INFO: test-rollover-controller-2wjrq started at 2021-11-13 01:00:05 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container httpd ready: true, restart count 0
Nov 13 01:00:26.551: INFO: fail-once-local-cv8t5 started at 2021-11-13 01:00:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container c ready: false, restart count 0
Nov 13 01:00:26.551: INFO: cmk-init-discover-node1-vkj2s started at 2021-11-12 21:20:18 +0000 UTC (0+3 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container discover ready: false, restart count 0
Nov 13 01:00:26.551: INFO: Container init ready: false, restart count 0
Nov 13 01:00:26.551: INFO: Container install ready: false, restart count 0
Nov 13 01:00:26.551: INFO: cmk-webhook-6c9d5f8578-2gp25 started at 2021-11-12 21:21:01 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container cmk-webhook ready: true, restart count 0
Nov 13 01:00:26.551: INFO: simpletest-rc-to-be-deleted-jlpt2 started at 2021-11-13 00:59:46 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container nginx ready: true, restart count 0
Nov 13 01:00:26.551: INFO: test-rolling-update-deployment-585b757574-wg24c started at 2021-11-13 01:00:17 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container agnhost ready: false, restart count 0
Nov 13 01:00:26.551: INFO: node-feature-discovery-worker-zgr4c started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container nfd-worker ready: true, restart count 0
Nov 13 01:00:26.551: INFO: externalname-service-szcjp started at 2021-11-13 00:58:14 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container externalname-service ready: true, restart count 0
Nov 13 01:00:26.551: INFO: prometheus-k8s-0 started at 2021-11-12 21:22:14 +0000 UTC (0+4 container statuses recorded)
Nov 13 01:00:26.551: INFO: Container config-reloader ready: true, restart count 0
Nov 13 01:00:26.551: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Nov 13 01:00:26.551: INFO: Container grafana ready: true, restart count 0
Nov 13 01:00:26.551: INFO: Container prometheus ready: true, restart count 1
W1113 01:00:26.564674 36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 01:00:27.044: INFO: Latency metrics for node node1
Nov 13 01:00:27.044: INFO: Logging node info for node node2
Nov 13 01:00:27.049: INFO: Node Info: &Node{ObjectMeta:{node2 652722dd-12b1-4529-ba4d-a00c590e4a68 70412 0 2021-11-12 21:07:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-12 21:20:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:18 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:18 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 01:00:18 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 01:00:18 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fec67f7547064c508c27d44a9bf99ae7,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0a05ac00-ff21-4518-bf68-3564c7a8cf65,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 01:00:27.049: INFO: Logging kubelet events for node node2
Nov 13 01:00:27.052: INFO: Logging pods the kubelet thinks is on node node2
Nov 13 01:00:27.069: INFO: kube-flannel-mg66r started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 01:00:27.069: INFO: Init container install-cni ready: true, restart count 2
Nov 13 01:00:27.069: INFO: Container kube-flannel ready: true, restart count 2
Nov 13 01:00:27.069: INFO: kube-multus-ds-amd64-2wqj5 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:27.069: INFO: Container kube-multus ready: true, restart count 1
Nov 13 01:00:27.069: INFO: kubernetes-dashboard-785dcbb76d-w2mls started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:27.069: INFO: Container kubernetes-dashboard ready: true, restart count 1
Nov 13 01:00:27.069: INFO: simpletest-rc-to-be-deleted-45xcv started at 2021-11-13 00:59:46 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:27.069: INFO: Container nginx ready: true, restart count 0
Nov 13 01:00:27.069: INFO: kube-proxy-pzhf2 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:27.069: INFO: Container kube-proxy ready: true, restart count 1
Nov 13 01:00:27.069: INFO: collectd-mp2z6 started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded)
Nov 13 01:00:27.069: INFO: Container collectd ready: true, restart count 0
Nov 13 01:00:27.069: INFO: Container collectd-exporter ready: true, restart count 0
Nov 13 01:00:27.069: INFO: Container rbac-proxy ready: true, restart count 0
Nov 13 01:00:27.069: INFO: fail-once-local-bd4gs started at 2021-11-13 00:59:47 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:27.069: INFO: Container c ready: false, restart count 1
Nov 13 01:00:27.069: INFO: simpletest.rc-fn44r started at 2021-11-13 00:59:51 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:27.069: INFO: Container nginx ready: true, restart count 0
Nov 13 01:00:27.069: INFO: node-feature-discovery-worker-mm7xs started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:27.069: INFO: Container nfd-worker ready: true, restart count 0
Nov 13 01:00:27.069: INFO: simpletest.rc-fv568 started at 2021-11-13 00:59:51 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:27.069: INFO: Container nginx ready: true, restart count 0
Nov 13 01:00:27.069: INFO: execpodwxlzs started at 2021-11-13 00:58:19 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:27.069: INFO: Container agnhost-container ready: true, restart count 0
Nov 13 01:00:27.069: INFO: cmk-qhvr7 started at 2021-11-12 21:21:01 +0000 UTC (0+2 container statuses recorded)
Nov 13 01:00:27.069: INFO: Container nodereport ready: true, restart count 0
Nov 13 01:00:27.069: INFO: Container reconcile ready: true, restart count 0
Nov 13 01:00:27.069: INFO: simpletest-rc-to-be-deleted-6fjqv started at 2021-11-13 00:59:46 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:27.069: INFO: Container nginx ready: true, restart count 0
Nov 13 01:00:27.069: INFO: fail-once-local-kmzl5 started at 2021-11-13 00:59:47 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:27.069: INFO: Container c ready: false, restart count 1
Nov 13 01:00:27.069: INFO: simpletest.rc-q8w5r started at 2021-11-13 00:59:51 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:27.069: INFO: Container nginx ready: true, restart count 0
Nov 13 01:00:27.069: INFO: simpletest-rc-to-be-deleted-c99mp started at 2021-11-13 00:59:46 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:27.069: INFO: Container nginx ready: true, restart count 0
Nov 13 01:00:27.069: INFO: sample-webhook-deployment-78988fc6cd-l2pwm started at 2021-11-13 01:00:17 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:27.069: INFO: Container sample-webhook ready: false, restart count 0
Nov 13 01:00:27.069: INFO: downwardapi-volume-c3eecae7-c990-48f3-8aa1-132e10e79272 started at (0+0 container statuses recorded)
Nov 13 01:00:27.069: INFO: cmk-init-discover-node2-5f4hp started at 2021-11-12 21:20:38 +0000 UTC (0+3 container statuses recorded)
Nov 13 01:00:27.069: INFO: Container discover ready: false, restart count 0
Nov 13 01:00:27.069: INFO: Container init ready: false, restart count 0
Nov 13 01:00:27.069: INFO: Container install ready: false, restart count 0
Nov 13 01:00:27.070: INFO: node-exporter-hstd9 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 01:00:27.070: INFO: Container kube-rbac-proxy ready: true, restart count 0
Nov 13 01:00:27.070: INFO: Container node-exporter ready: true, restart count 0
Nov 13 01:00:27.070: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 started at 2021-11-12 21:25:09 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:27.070: INFO: Container tas-extender ready: true, restart count 0
Nov 13 01:00:27.070: INFO: fail-once-local-lw9rd started at 2021-11-13 01:00:13 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:27.070: INFO: Container c ready: false, restart count 1
Nov 13 01:00:27.070: INFO: simpletest-rc-to-be-deleted-86xml started at 2021-11-13 00:59:46 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:27.070: INFO: Container nginx ready: true, restart count 0
Nov 13 01:00:27.070: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:27.070: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Nov 13 01:00:27.070: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:27.070: INFO: Container kube-sriovdp ready: true, restart count 0
Nov 13 01:00:27.070: INFO: simpletest.rc-4j92t started at 2021-11-13 00:59:51 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:27.070: INFO: Container nginx ready: true, restart count 0
Nov 13 01:00:27.070: INFO: simpletest.rc-bz29t started at 2021-11-13 00:59:51 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:27.070: INFO: Container nginx ready: true, restart count 0
Nov 13 01:00:27.070: INFO: nginx-proxy-node2 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:00:27.070: INFO: Container nginx-proxy ready: true, restart count 2
W1113 01:00:27.093719 36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 01:00:28.617: INFO: Latency metrics for node node2
Nov 13 01:00:28.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2467" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [135.064 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Nov 13 01:00:26.195: Unexpected error:
      <*errors.errorString | 0xc004dd8be0>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31745 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31745 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351
------------------------------
{"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":9,"skipped":151,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
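Aside (not part of the suite's output): the failure above means no TCP connection to the NodePort endpoint succeeded within the 2m0s budget. Conceptually the check amounts to redialing the endpoint until it answers; below is a standalone sketch of such a probe, with the endpoint and timeout taken from the failure message. The helper name is ours, not the framework's, and the framework's real check in test/e2e/network/service.go is more involved.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitReachable dials addr over TCP until a connection succeeds or the
// overall timeout elapses. Illustrative only.
func waitReachable(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
}

func main() {
	// Endpoint and budget taken from the failure above.
	if err := waitReachable("10.10.190.207:31745", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}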
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:21.368: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:23.368: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:25.368: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:27.367: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:29.367: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Nov 13 01:00:29.376: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4631 ff0f61b7-affd-4c02-ae3f-860a047ab4c0 70888 1 2021-11-13 01:00:17 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-11-13 01:00:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-11-13 01:00:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003390e38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-11-13 01:00:17 +0000 UTC,LastTransitionTime:2021-11-13 01:00:17 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-585b757574" has successfully progressed.,LastUpdateTime:2021-11-13 01:00:27 +0000 UTC,LastTransitionTime:2021-11-13 01:00:17 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Nov 13 01:00:29.381: INFO: New ReplicaSet "test-rolling-update-deployment-585b757574" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-585b757574 deployment-4631 2e6fcd89-6732-431a-ae96-728802e52d3e 70879 1 2021-11-13 01:00:17 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment ff0f61b7-affd-4c02-ae3f-860a047ab4c0 0xc0033912e7 0xc0033912e8}] [] [{kube-controller-manager Update apps/v1 2021-11-13 01:00:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff0f61b7-affd-4c02-ae3f-860a047ab4c0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 585b757574,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003391378 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Nov 13 01:00:29.381: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Nov 13 01:00:29.381: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4631 d3b0b6e7-ce0d-48d6-a906-c33e9d99c3a6 70887 2 2021-11-13 01:00:10 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment ff0f61b7-affd-4c02-ae3f-860a047ab4c0 0xc0033911d7 0xc0033911d8}] [] [{e2e.test Update apps/v1 2021-11-13 01:00:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-11-13 01:00:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff0f61b7-affd-4c02-ae3f-860a047ab4c0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003391278 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 13 01:00:29.384: INFO: Pod "test-rolling-update-deployment-585b757574-wg24c" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-585b757574-wg24c test-rolling-update-deployment-585b757574- deployment-4631 c304d6c0-a835-45dd-84d0-b636f71189d7 70878 0 2021-11-13 01:00:17 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.121" ], "mac": "aa:77:0a:0e:10:26", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": 
"default-cni-network", "interface": "eth0", "ips": [ "10.244.3.121" ], "mac": "aa:77:0a:0e:10:26", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rolling-update-deployment-585b757574 2e6fcd89-6732-431a-ae96-728802e52d3e 0xc00339178f 0xc0033917a0}] [] [{kube-controller-manager Update v1 2021-11-13 01:00:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2e6fcd89-6732-431a-ae96-728802e52d3e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-13 01:00:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-13 01:00:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.121\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lhwfl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lhwfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPol
icy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.121,StartTime:2021-11-13 01:00:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-13 01:00:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://379d54cbbd6d9763e7df0521f8dd8cee327279c17d39870de868152c30f7aa54,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.121,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:29.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4631" for this suite. 
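------------------------------
[Editor's note] The RollingUpdateDeployment block above repeatedly logs "deployment status: ..." until the controller converges. The following is a minimal client-go sketch of that wait loop, using the kubeconfig path printed at the top of this log; the namespace and deployment name are the ephemeral ones from this run, and the 2-minute deadline is an illustrative assumption.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as shown at the top of this log.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Ephemeral names from this run; any Deployment works.
	ns, name := "deployment-4631", "test-rolling-update-deployment"
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		d, err := client.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Rollout is done once the controller has seen the latest spec and
		// every desired replica is both updated and available, which is the
		// condition the e2e poll above is waiting for.
		if d.Status.ObservedGeneration >= d.Generation &&
			d.Status.UpdatedReplicas == *d.Spec.Replicas &&
			d.Status.AvailableReplicas == *d.Spec.Replicas {
			fmt.Println("rollout complete")
			return
		}
		fmt.Printf("deployment status: %+v\n", d.Status)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for rollout")
}
------------------------------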
• [SLOW TEST:19.084 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":15,"skipped":163,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:24.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:35.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3056" for this suite. • [SLOW TEST:11.068 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":-1,"completed":20,"skipped":380,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:19.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 13 01:00:19.674: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c3eecae7-c990-48f3-8aa1-132e10e79272" in namespace "downward-api-9998" to be "Succeeded or Failed" Nov 13 01:00:19.677: INFO: Pod "downwardapi-volume-c3eecae7-c990-48f3-8aa1-132e10e79272": Phase="Pending", Reason="", readiness=false. Elapsed: 2.684469ms Nov 13 01:00:21.681: INFO: Pod "downwardapi-volume-c3eecae7-c990-48f3-8aa1-132e10e79272": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007150024s Nov 13 01:00:23.687: INFO: Pod "downwardapi-volume-c3eecae7-c990-48f3-8aa1-132e10e79272": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012834784s Nov 13 01:00:25.692: INFO: Pod "downwardapi-volume-c3eecae7-c990-48f3-8aa1-132e10e79272": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017823307s Nov 13 01:00:27.696: INFO: Pod "downwardapi-volume-c3eecae7-c990-48f3-8aa1-132e10e79272": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021466289s Nov 13 01:00:29.701: INFO: Pod "downwardapi-volume-c3eecae7-c990-48f3-8aa1-132e10e79272": Phase="Pending", Reason="", readiness=false. Elapsed: 10.026770238s Nov 13 01:00:31.704: INFO: Pod "downwardapi-volume-c3eecae7-c990-48f3-8aa1-132e10e79272": Phase="Pending", Reason="", readiness=false. Elapsed: 12.029923955s Nov 13 01:00:33.711: INFO: Pod "downwardapi-volume-c3eecae7-c990-48f3-8aa1-132e10e79272": Phase="Pending", Reason="", readiness=false. Elapsed: 14.036691217s Nov 13 01:00:35.714: INFO: Pod "downwardapi-volume-c3eecae7-c990-48f3-8aa1-132e10e79272": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.040201872s STEP: Saw pod success Nov 13 01:00:35.714: INFO: Pod "downwardapi-volume-c3eecae7-c990-48f3-8aa1-132e10e79272" satisfied condition "Succeeded or Failed" Nov 13 01:00:35.717: INFO: Trying to get logs from node node2 pod downwardapi-volume-c3eecae7-c990-48f3-8aa1-132e10e79272 container client-container: STEP: delete the pod Nov 13 01:00:35.828: INFO: Waiting for pod downwardapi-volume-c3eecae7-c990-48f3-8aa1-132e10e79272 to disappear Nov 13 01:00:35.830: INFO: Pod downwardapi-volume-c3eecae7-c990-48f3-8aa1-132e10e79272 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:35.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9998" for this suite. 
• [SLOW TEST:16.261 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":80,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:17.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 13 01:00:17.881: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 13 01:00:19.890: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:21.893: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 
01:00:23.895: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:25.893: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:27.896: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:29.896: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:31.894: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:33.895: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362017, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 13 01:00:36.900: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:36.905: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4085" for this suite. STEP: Destroying namespace "webhook-4085-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.583 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":24,"skipped":502,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:29.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on node default medium Nov 13 01:00:29.448: INFO: Waiting up to 5m0s for pod "pod-c0f5f282-cd8b-4b36-9a48-8fd6613f9aa6" in namespace "emptydir-1924" to be "Succeeded or Failed" Nov 13 01:00:29.451: INFO: Pod "pod-c0f5f282-cd8b-4b36-9a48-8fd6613f9aa6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.514791ms Nov 13 01:00:31.454: INFO: Pod "pod-c0f5f282-cd8b-4b36-9a48-8fd6613f9aa6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005993991s Nov 13 01:00:33.463: INFO: Pod "pod-c0f5f282-cd8b-4b36-9a48-8fd6613f9aa6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014768849s Nov 13 01:00:35.468: INFO: Pod "pod-c0f5f282-cd8b-4b36-9a48-8fd6613f9aa6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019651831s Nov 13 01:00:37.472: INFO: Pod "pod-c0f5f282-cd8b-4b36-9a48-8fd6613f9aa6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.023591784s STEP: Saw pod success Nov 13 01:00:37.472: INFO: Pod "pod-c0f5f282-cd8b-4b36-9a48-8fd6613f9aa6" satisfied condition "Succeeded or Failed" Nov 13 01:00:37.474: INFO: Trying to get logs from node node2 pod pod-c0f5f282-cd8b-4b36-9a48-8fd6613f9aa6 container test-container: STEP: delete the pod Nov 13 01:00:37.615: INFO: Waiting for pod pod-c0f5f282-cd8b-4b36-9a48-8fd6613f9aa6 to disappear Nov 13 01:00:37.617: INFO: Pod pod-c0f5f282-cd8b-4b36-9a48-8fd6613f9aa6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:37.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1924" for this suite. 
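------------------------------
[Editor's note] The EmptyDir test above creates a pod whose emptyDir leaves the medium unset (the node's default storage medium) and verifies the mounted directory's mode; this conformance family expects 0777 on the volume directory. A hedged client-go sketch of the same flow follows: create the pod, wait for a terminal phase, then read its log. The namespace, image, and stat command are assumptions.

package main

import (
	"context"
	"fmt"
	"io"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ns := "default" // assumed namespace

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-check"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "stat -c '%a' /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			// Leaving Medium unset selects the node's default storage medium,
			// which is the case this test exercises.
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Poll until the pod reaches a terminal phase, mirroring the
	// "Succeeded or Failed" wait in the log above.
	for {
		p, err := client.CoreV1().Pods(ns).Get(context.TODO(), pod.Name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if p.Status.Phase == corev1.PodSucceeded || p.Status.Phase == corev1.PodFailed {
			break
		}
		time.Sleep(2 * time.Second)
	}
	stream, err := client.CoreV1().Pods(ns).GetLogs(pod.Name, &corev1.PodLogOptions{}).Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	out, err := io.ReadAll(stream)
	if err != nil {
		panic(err)
	}
	fmt.Printf("volume mode: %s", out)
}
------------------------------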
• [SLOW TEST:8.207 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":173,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:28.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Updating PodDisruptionBudget status STEP: Waiting for all pods to be running Nov 13 01:00:30.724: INFO: running pods: 0 < 1 Nov 13 01:00:32.731: INFO: running pods: 0 < 1 Nov 13 01:00:34.729: INFO: running pods: 0 < 1 Nov 13 01:00:36.727: INFO: running pods: 0 < 1 STEP: locating a running pod STEP: Waiting for the pdb to be processed STEP: Patching PodDisruptionBudget status STEP: Waiting for the pdb to be processed [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:38.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-6946" for this suite. 
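------------------------------
[Editor's note] In the DisruptionController block above, "Waiting for the pdb to be processed" means waiting for the disruption controller to reconcile the latest PodDisruptionBudget generation. Below is a minimal sketch of creating a PDB and waiting for that condition; the name, namespace, selector, and minAvailable value are placeholders, not taken from this run.

package main

import (
	"context"
	"fmt"
	"time"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	minAvailable := intstr.FromInt(1) // placeholder budget
	pdb := &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "example-pdb", Namespace: "default"},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"foo": "bar"}},
		},
	}
	created, err := client.PolicyV1().PodDisruptionBudgets("default").Create(context.TODO(), pdb, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// The PDB counts as "processed" once the disruption controller has
	// observed the current generation, which is what the e2e STEP waits for.
	for {
		cur, err := client.PolicyV1().PodDisruptionBudgets("default").Get(context.TODO(), created.Name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if cur.Status.ObservedGeneration >= cur.Generation {
			fmt.Printf("pdb processed: disruptionsAllowed=%d\n", cur.Status.DisruptionsAllowed)
			return
		}
		time.Sleep(time.Second)
	}
}
------------------------------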
• [SLOW TEST:10.092 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":10,"skipped":168,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:05.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:00:05.400: INFO: Pod name rollover-pod: Found 0 pods out of 1 Nov 13 01:00:10.403: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Nov 13 01:00:16.411: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Nov 13 01:00:18.413: INFO: Creating deployment "test-rollover-deployment" Nov 13 01:00:18.419: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Nov 13 01:00:20.424: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Nov 13 01:00:20.430: INFO: Ensure that both replica sets have 1 created replica Nov 13 01:00:20.437: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Nov 13 01:00:20.446: INFO: Updating deployment test-rollover-deployment Nov 13 01:00:20.446: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Nov 13 01:00:22.451: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Nov 13 01:00:22.456: INFO: Make sure deployment "test-rollover-deployment" is complete Nov 13 01:00:22.461: INFO: all replica sets need to contain the pod-template-hash label Nov 13 01:00:22.461: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362020, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:24.467: INFO: all replica sets need to contain the pod-template-hash label Nov 13 01:00:24.467: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362020, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:26.469: INFO: all replica sets need to contain the pod-template-hash label Nov 13 01:00:26.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362020, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:28.468: INFO: all replica sets need to contain the pod-template-hash label Nov 13 01:00:28.468: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362020, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:30.467: INFO: all replica sets need to contain the pod-template-hash label Nov 13 01:00:30.467: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362028, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:32.469: INFO: all replica sets need to contain the pod-template-hash label Nov 13 01:00:32.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362028, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:34.470: INFO: all replica sets need to contain the pod-template-hash label Nov 13 01:00:34.470: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362028, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:36.469: INFO: all replica sets need to contain the pod-template-hash label Nov 13 01:00:36.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362028, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:38.467: INFO: all replica sets need to contain the pod-template-hash label Nov 13 01:00:38.467: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362028, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362018, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:40.467: INFO: Nov 13 01:00:40.467: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Nov 13 01:00:40.475: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-3851 c23ad3fe-f143-4d6b-94ee-38b2e129cb94 71250 2 2021-11-13 01:00:18 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-11-13 01:00:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-11-13 01:00:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost 
k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003826088 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-11-13 01:00:18 +0000 UTC,LastTransitionTime:2021-11-13 01:00:18 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2021-11-13 01:00:39 +0000 UTC,LastTransitionTime:2021-11-13 01:00:18 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Nov 13 01:00:40.478: INFO: New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-3851 6af6c703-dee4-4602-9e7e-cd8f8d0b7932 71238 2 2021-11-13 01:00:20 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment c23ad3fe-f143-4d6b-94ee-38b2e129cb94 0xc003826630 0xc003826631}] [] [{kube-controller-manager Update apps/v1 2021-11-13 01:00:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c23ad3fe-f143-4d6b-94ee-38b2e129cb94\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [] [] []} {[] [] 
[{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0038266a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Nov 13 01:00:40.478: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Nov 13 01:00:40.478: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-3851 a801465f-2433-401e-939b-d78f58ca98bc 71248 2 2021-11-13 01:00:05 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment c23ad3fe-f143-4d6b-94ee-38b2e129cb94 0xc003826427 0xc003826428}] [] [{e2e.test Update apps/v1 2021-11-13 01:00:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-11-13 01:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c23ad3fe-f143-4d6b-94ee-38b2e129cb94\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0038264c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 13 01:00:40.479: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-3851 d3bf3c82-3b55-483b-98b6-c08070c4c477 70606 2 2021-11-13 01:00:18 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment c23ad3fe-f143-4d6b-94ee-38b2e129cb94 0xc003826537 0xc003826538}] [] [{kube-controller-manager Update apps/v1 2021-11-13 01:00:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c23ad3fe-f143-4d6b-94ee-38b2e129cb94\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0038265c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 13 01:00:40.481: INFO: Pod "test-rollover-deployment-98c5f4599-sx84m" is available: &Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-sx84m test-rollover-deployment-98c5f4599- deployment-3851 7a63a1e6-1169-4b48-b222-809b3c2d88c0 70941 0 2021-11-13 01:00:20 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.123" ], "mac": "d6:47:43:d0:1d:63", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.123" ], "mac": "d6:47:43:d0:1d:63", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 6af6c703-dee4-4602-9e7e-cd8f8d0b7932 0xc003826b9f 0xc003826bb0}] [] [{kube-controller-manager Update v1 2021-11-13 01:00:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6af6c703-dee4-4602-9e7e-cd8f8d0b7932\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-13 01:00:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-13 01:00:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.123\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2kcrn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2kcrn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vo
lumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:00:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.123,StartTime:2021-11-13 01:00:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-13 01:00:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://d92008f1dd8802cfebfe3ec3ad4882ac0851778d9bbb69ce16e16f55d9a3263a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.123,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:40.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3851" for this suite. 
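The rollover check above repeatedly fetches the Deployment and logs its status until the updated ReplicaSet owns all replicas and both old ReplicaSets are scaled to zero. Below is a minimal client-go sketch of that kind of polling loop, not the e2e framework's own helper: the namespace, deployment name, kubeconfig path, and 2-second cadence are taken from this log, while the timeout and completion condition are illustrative assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as printed in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s (the cadence visible in the log lines above) until the
	// Deployment reports only updated, available replicas.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		d, err := cs.AppsV1().Deployments("deployment-3851").Get(
			context.TODO(), "test-rollover-deployment", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		s := d.Status
		fmt.Printf("deployment status: %+v\n", s)
		done := s.ObservedGeneration >= d.Generation &&
			s.UpdatedReplicas == *d.Spec.Replicas &&
			s.AvailableReplicas == *d.Spec.Replicas &&
			s.UnavailableReplicas == 0
		return done, nil
	})
	if err != nil {
		panic(err)
	}
}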
• [SLOW TEST:35.121 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":28,"skipped":326,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:37.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name projected-secret-test-b53248ef-eb42-4ed2-b830-b99a3126a6d5 STEP: Creating a pod to test consume secrets Nov 13 01:00:37.684: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0963121f-96af-4eac-a840-2a1994ee119c" in namespace "projected-2988" to be "Succeeded or Failed" Nov 13 01:00:37.686: INFO: Pod "pod-projected-secrets-0963121f-96af-4eac-a840-2a1994ee119c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.358263ms Nov 13 01:00:39.689: INFO: Pod "pod-projected-secrets-0963121f-96af-4eac-a840-2a1994ee119c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005371909s Nov 13 01:00:41.693: INFO: Pod "pod-projected-secrets-0963121f-96af-4eac-a840-2a1994ee119c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009136456s Nov 13 01:00:43.697: INFO: Pod "pod-projected-secrets-0963121f-96af-4eac-a840-2a1994ee119c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013116639s STEP: Saw pod success Nov 13 01:00:43.697: INFO: Pod "pod-projected-secrets-0963121f-96af-4eac-a840-2a1994ee119c" satisfied condition "Succeeded or Failed" Nov 13 01:00:43.699: INFO: Trying to get logs from node node2 pod pod-projected-secrets-0963121f-96af-4eac-a840-2a1994ee119c container secret-volume-test: STEP: delete the pod Nov 13 01:00:43.721: INFO: Waiting for pod pod-projected-secrets-0963121f-96af-4eac-a840-2a1994ee119c to disappear Nov 13 01:00:43.723: INFO: Pod pod-projected-secrets-0963121f-96af-4eac-a840-2a1994ee119c no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:43.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2988" for this suite. 
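The projected-secret test above creates a single Secret and consumes it through more than one volume in the same pod, then waits for the pod to reach "Succeeded or Failed". A rough sketch of such a pod spec built with client-go types follows; the image matches the agnhost image seen elsewhere in this log, but the volume names, mount paths, and mounttest flags are assumptions for illustration.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithProjectedSecret builds a pod that mounts the same Secret through two
// independent projected volumes, roughly the shape the test above creates.
func podWithProjectedSecret(secretName string) *corev1.Pod {
	vol := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						},
					}},
				},
			},
		}
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes:       []corev1.Volume{vol("secret-volume-1"), vol("secret-volume-2")},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				// agnhost's mounttest subcommand reads mounted files back out;
				// the exact flag here is an assumption, not taken from the log.
				Args: []string{"mounttest", "--file_content=/etc/secret-volume-1/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", podWithProjectedSecret("projected-secret-test"))
}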
• [SLOW TEST:6.083 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":183,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:35.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting up the test STEP: Creating hostNetwork=false pod Nov 13 01:00:35.893: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:00:37.896: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:00:39.898: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:00:41.897: INFO: The status of Pod test-pod is Running (Ready = true) STEP: Creating hostNetwork=true pod Nov 13 01:00:41.911: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:00:43.915: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:00:45.916: INFO: The status of Pod test-host-network-pod is Running (Ready = true) STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Nov 13 01:00:45.920: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3287 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 01:00:45.920: INFO: >>> kubeConfig: /root/.kube/config Nov 13 01:00:46.133: INFO: Exec stderr: "" Nov 13 01:00:46.133: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3287 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 01:00:46.133: INFO: >>> kubeConfig: /root/.kube/config Nov 13 01:00:46.311: INFO: Exec stderr: "" Nov 13 01:00:46.312: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3287 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 01:00:46.312: INFO: >>> kubeConfig: /root/.kube/config Nov 13 01:00:46.433: INFO: Exec stderr: "" Nov 13 01:00:46.433: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3287 PodName:test-pod ContainerName:busybox-2 Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 01:00:46.433: INFO: >>> kubeConfig: /root/.kube/config Nov 13 01:00:46.518: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Nov 13 01:00:46.518: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3287 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 01:00:46.518: INFO: >>> kubeConfig: /root/.kube/config Nov 13 01:00:46.601: INFO: Exec stderr: "" Nov 13 01:00:46.602: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3287 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 01:00:46.602: INFO: >>> kubeConfig: /root/.kube/config Nov 13 01:00:46.697: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Nov 13 01:00:46.697: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3287 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 01:00:46.697: INFO: >>> kubeConfig: /root/.kube/config Nov 13 01:00:46.781: INFO: Exec stderr: "" Nov 13 01:00:46.781: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3287 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 01:00:46.781: INFO: >>> kubeConfig: /root/.kube/config Nov 13 01:00:46.861: INFO: Exec stderr: "" Nov 13 01:00:46.861: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3287 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 01:00:46.862: INFO: >>> kubeConfig: /root/.kube/config Nov 13 01:00:46.942: INFO: Exec stderr: "" Nov 13 01:00:46.942: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3287 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 01:00:46.942: INFO: >>> kubeConfig: /root/.kube/config Nov 13 01:00:47.039: INFO: Exec stderr: "" [AfterEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:47.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-3287" for this suite. 
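Each ExecWithOptions line above runs `cat /etc/hosts` inside a container via the pod's exec subresource, capturing stdout and stderr. A minimal client-go sketch of one such exec follows; the namespace, pod, container, and command are taken from the log, while the kubeconfig path and error handling are assumptions, and this is not the e2e framework's actual ExecWithOptions implementation.

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// POST to the pod's exec subresource with the command and stream flags,
	// mirroring the ExecWithOptions fields printed in the log.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("e2e-kubelet-etc-hosts-3287").
		Name("test-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "busybox-1",
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Printf("stdout: %q\nstderr: %q\n", stdout.String(), stderr.String())
}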
• [SLOW TEST:11.192 seconds] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":87,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:38.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 13 01:00:39.123: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 13 01:00:41.134: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362039, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362039, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362039, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362039, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:43.139: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362039, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362039, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362039, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362039, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has 
paired with the endpoint Nov 13 01:00:46.147: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Nov 13 01:00:50.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=webhook-6432 attach --namespace=webhook-6432 to-be-attached-pod -i -c=container1' Nov 13 01:00:50.358: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:50.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6432" for this suite. STEP: Destroying namespace "webhook-6432-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.615 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":11,"skipped":173,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:40.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-cde40d0a-18b9-4311-93a0-1cc8210e37cf STEP: Creating configMap with name cm-test-opt-upd-a2976de6-ece1-4a2e-a041-7fdb135927ab STEP: Creating the pod Nov 13 01:00:40.569: INFO: The status of Pod pod-projected-configmaps-5d7d285c-8628-46cd-ba8b-26de56302856 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:00:42.573: INFO: The status of Pod pod-projected-configmaps-5d7d285c-8628-46cd-ba8b-26de56302856 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:00:44.573: INFO: The status of Pod pod-projected-configmaps-5d7d285c-8628-46cd-ba8b-26de56302856 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:00:46.573: INFO: The status of Pod pod-projected-configmaps-5d7d285c-8628-46cd-ba8b-26de56302856 is Running (Ready = true) STEP: Deleting configmap 
cm-test-opt-del-cde40d0a-18b9-4311-93a0-1cc8210e37cf STEP: Updating configmap cm-test-opt-upd-a2976de6-ece1-4a2e-a041-7fdb135927ab STEP: Creating configMap with name cm-test-opt-create-8f5c5218-c037-43f5-90fc-6fa302f4804b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:50.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6719" for this suite. • [SLOW TEST:10.130 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":338,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:47.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:51.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4169" for this suite. 
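The Kubelet test above asserts that a pod whose command always fails ends up with a terminated container state carrying a non-empty reason. A sketch of reading that state back with client-go follows; the namespace comes from the log, but the pod name is hypothetical since the log does not print it, and the kubeconfig path is assumed.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Hypothetical pod name; the failing pod's name is not printed in the log.
	pod, err := cs.CoreV1().Pods("kubelet-test-4169").Get(
		context.TODO(), "bin-false-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		if t := st.State.Terminated; t != nil {
			// The test's assertion amounts to: Reason is non-empty.
			fmt.Printf("container %s terminated: reason=%q exitCode=%d\n",
				st.Name, t.Reason, t.ExitCode)
		}
	}
}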
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":97,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:43.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Nov 13 01:00:44.252: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 13 01:00:44.263: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 13 01:00:46.274: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362044, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362044, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362044, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362044, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:48.277: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362044, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362044, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362044, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362044, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 13 01:00:51.285: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:51.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3149" for this suite. STEP: Destroying namespace "webhook-3149-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.682 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":18,"skipped":220,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:51.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 13 01:00:51.232: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66e118d3-09c1-41c0-8a4b-4202ebab8c67" in namespace "downward-api-3215" to be "Succeeded or Failed" Nov 13 01:00:51.234: INFO: Pod "downwardapi-volume-66e118d3-09c1-41c0-8a4b-4202ebab8c67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184502ms Nov 13 01:00:53.239: INFO: Pod "downwardapi-volume-66e118d3-09c1-41c0-8a4b-4202ebab8c67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006634224s Nov 13 01:00:55.242: INFO: Pod "downwardapi-volume-66e118d3-09c1-41c0-8a4b-4202ebab8c67": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010123948s STEP: Saw pod success Nov 13 01:00:55.242: INFO: Pod "downwardapi-volume-66e118d3-09c1-41c0-8a4b-4202ebab8c67" satisfied condition "Succeeded or Failed" Nov 13 01:00:55.244: INFO: Trying to get logs from node node2 pod downwardapi-volume-66e118d3-09c1-41c0-8a4b-4202ebab8c67 container client-container: STEP: delete the pod Nov 13 01:00:55.257: INFO: Waiting for pod downwardapi-volume-66e118d3-09c1-41c0-8a4b-4202ebab8c67 to disappear Nov 13 01:00:55.259: INFO: Pod downwardapi-volume-66e118d3-09c1-41c0-8a4b-4202ebab8c67 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:55.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3215" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":130,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:51.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Nov 13 01:00:51.539: INFO: Waiting up to 5m0s for pod "pod-45a6da00-aea1-4de6-9759-8d03a2b1bb80" in namespace "emptydir-1377" to be "Succeeded or Failed" Nov 13 01:00:51.541: INFO: Pod "pod-45a6da00-aea1-4de6-9759-8d03a2b1bb80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.381616ms Nov 13 01:00:53.545: INFO: Pod "pod-45a6da00-aea1-4de6-9759-8d03a2b1bb80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005799371s Nov 13 01:00:55.547: INFO: Pod "pod-45a6da00-aea1-4de6-9759-8d03a2b1bb80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008355267s STEP: Saw pod success Nov 13 01:00:55.547: INFO: Pod "pod-45a6da00-aea1-4de6-9759-8d03a2b1bb80" satisfied condition "Succeeded or Failed" Nov 13 01:00:55.550: INFO: Trying to get logs from node node1 pod pod-45a6da00-aea1-4de6-9759-8d03a2b1bb80 container test-container: STEP: delete the pod Nov 13 01:00:55.562: INFO: Waiting for pod pod-45a6da00-aea1-4de6-9759-8d03a2b1bb80 to disappear Nov 13 01:00:55.564: INFO: Pod pod-45a6da00-aea1-4de6-9759-8d03a2b1bb80 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:55.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1377" for this suite. 
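The EmptyDir test above exercises a tmpfs-backed emptyDir with a 0666 file created by a non-root user. A sketch of the pod shape behind it: the memory medium is what makes the emptyDir a tmpfs mount. The image matches agnhost as used elsewhere in this log, but the mounttest flags, paths, and UID are assumptions for illustration.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func tmpfsEmptyDirPod() *corev1.Pod {
	nonRoot := int64(1001) // assumed non-root UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" makes the emptyDir a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				// Create a file with 0666 permissions and report its mode back;
				// the exact flags are an assumption for illustration.
				Args:            []string{"mounttest", "--new_file_0666=/test-volume/test-file", "--file_perm=/test-volume/test-file"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRoot},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", tmpfsEmptyDirPod())
}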
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":226,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:50.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption is created Nov 13 01:00:50.849: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:00:52.853: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:00:54.854: INFO: The status of Pod pod-adoption is Running (Ready = true) STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:55.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6531" for this suite. 
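Adoption in the ReplicationController test above happens because the RC's selector matches the pre-existing pod's "name" label, so the controller-manager stamps a controller ownerReference onto the orphan. A sketch of verifying that from a client; namespace and pod name come from the log, the kubeconfig path is assumed.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Adoption is visible on the pod itself: once an RC with a matching
	// selector exists, the pod gains a controller ownerReference to it.
	pod, err := cs.CoreV1().Pods("replication-controller-6531").Get(
		context.TODO(), "pod-adoption", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, ref := range pod.OwnerReferences {
		if ref.Kind == "ReplicationController" && ref.Controller != nil && *ref.Controller {
			fmt.Printf("adopted by ReplicationController %s (uid %s)\n", ref.Name, ref.UID)
		}
	}
}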
• [SLOW TEST:5.060 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":30,"skipped":423,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:50.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 13 01:00:50.476: INFO: Waiting up to 5m0s for pod "downwardapi-volume-91a2b954-a644-444d-9de8-3f8b0bf4ead6" in namespace "downward-api-395" to be "Succeeded or Failed" Nov 13 01:00:50.479: INFO: Pod "downwardapi-volume-91a2b954-a644-444d-9de8-3f8b0bf4ead6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088294ms Nov 13 01:00:52.481: INFO: Pod "downwardapi-volume-91a2b954-a644-444d-9de8-3f8b0bf4ead6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004627567s Nov 13 01:00:54.485: INFO: Pod "downwardapi-volume-91a2b954-a644-444d-9de8-3f8b0bf4ead6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008891009s Nov 13 01:00:56.491: INFO: Pod "downwardapi-volume-91a2b954-a644-444d-9de8-3f8b0bf4ead6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01466999s STEP: Saw pod success Nov 13 01:00:56.491: INFO: Pod "downwardapi-volume-91a2b954-a644-444d-9de8-3f8b0bf4ead6" satisfied condition "Succeeded or Failed" Nov 13 01:00:56.493: INFO: Trying to get logs from node node2 pod downwardapi-volume-91a2b954-a644-444d-9de8-3f8b0bf4ead6 container client-container: STEP: delete the pod Nov 13 01:00:56.591: INFO: Waiting for pod downwardapi-volume-91a2b954-a644-444d-9de8-3f8b0bf4ead6 to disappear Nov 13 01:00:56.593: INFO: Pod downwardapi-volume-91a2b954-a644-444d-9de8-3f8b0bf4ead6 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:56.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-395" for this suite. 
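The Downward API test above projects the container's memory limit into a volume file; because the container sets no memory limit, the projected value falls back to the node's allocatable memory. A sketch of that pod shape follows; the image matches agnhost as used elsewhere in this log, while the volume name, path, and flags are assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func downwardAPIMemoryPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				// Read the projected value back; the flag is an assumption.
				Args: []string{"mounttest", "--file_content=/etc/podinfo/memory_limit"},
				// No resources.limits.memory here: that omission is the point
				// of the test, triggering the node-allocatable default.
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", downwardAPIMemoryPod())
}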
• [SLOW TEST:6.156 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":197,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:59:46.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W1113 00:59:56.807975 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 01:00:58.827: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Nov 13 01:00:58.827: INFO: Deleting pod "simpletest-rc-to-be-deleted-45xcv" in namespace "gc-35" Nov 13 01:00:58.835: INFO: Deleting pod "simpletest-rc-to-be-deleted-6fjqv" in namespace "gc-35" Nov 13 01:00:58.842: INFO: Deleting pod "simpletest-rc-to-be-deleted-86xml" in namespace "gc-35" Nov 13 01:00:58.848: INFO: Deleting pod "simpletest-rc-to-be-deleted-c99mp" in namespace "gc-35" Nov 13 01:00:58.853: INFO: Deleting pod "simpletest-rc-to-be-deleted-jlpt2" in namespace "gc-35" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:00:58.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-35" for this suite. 
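The garbage collector test above deletes one of two owner RCs and then confirms that pods listing the surviving RC as an owner are not collected. The deletion step comes down to a foreground-propagation delete, sketched below with client-go; the namespace and RC name come from the log, the kubeconfig path is assumed.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Foreground deletion: the RC is removed only after the garbage collector
	// has processed its dependents. Pods that also carry a valid reference to
	// the surviving RC keep that owner and are not collected.
	fg := metav1.DeletePropagationForeground
	err = cs.CoreV1().ReplicationControllers("gc-35").Delete(
		context.TODO(),
		"simpletest-rc-to-be-deleted",
		metav1.DeleteOptions{PropagationPolicy: &fg},
	)
	if err != nil {
		panic(err)
	}
}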
• [SLOW TEST:72.151 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":14,"skipped":292,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:59:37.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W1113 00:59:37.947609 32 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: Ensuring more than one job is running at a time STEP: Ensuring at least two running jobs exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:01.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-6996" for this suite. • [SLOW TEST:84.044 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":23,"skipped":551,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:01.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:02.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6630" for this suite. 
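The Services test above finds a service by listing across all namespaces rather than within its own test namespace. That is a list against the empty namespace, sketched below with client-go; the kubeconfig path is assumed.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// metav1.NamespaceAll ("") lists services from every namespace, which is
	// how a service created in one test namespace shows up cluster-wide.
	svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(
		context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range svcs.Items {
		fmt.Printf("%s/%s\n", s.Namespace, s.Name)
	}
}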
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":24,"skipped":556,"failed":0} SS ------------------------------ [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:55.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Nov 13 01:00:55.637: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Nov 13 01:00:55.641: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Nov 13 01:00:55.641: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Nov 13 01:00:55.653: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Nov 13 01:00:55.653: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Nov 13 01:00:55.665: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Nov 13 01:00:55.665: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: 
Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Nov 13 01:01:02.709: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:02.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-3242" for this suite. • [SLOW TEST:7.123 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":20,"skipped":246,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:55.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-3dbc53a6-168b-4434-83d5-65d3cbd6b142 STEP: Creating a pod to test consume secrets Nov 13 01:00:55.917: INFO: Waiting up to 5m0s for pod "pod-secrets-3b12857d-c088-4b41-bc8e-4b918e928481" in namespace "secrets-9402" to be "Succeeded or Failed" Nov 13 01:00:55.920: INFO: Pod "pod-secrets-3b12857d-c088-4b41-bc8e-4b918e928481": Phase="Pending", Reason="", readiness=false. Elapsed: 3.129537ms Nov 13 01:00:57.923: INFO: Pod "pod-secrets-3b12857d-c088-4b41-bc8e-4b918e928481": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006395812s Nov 13 01:00:59.926: INFO: Pod "pod-secrets-3b12857d-c088-4b41-bc8e-4b918e928481": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00962701s Nov 13 01:01:01.930: INFO: Pod "pod-secrets-3b12857d-c088-4b41-bc8e-4b918e928481": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012768086s Nov 13 01:01:03.938: INFO: Pod "pod-secrets-3b12857d-c088-4b41-bc8e-4b918e928481": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.021504072s STEP: Saw pod success Nov 13 01:01:03.938: INFO: Pod "pod-secrets-3b12857d-c088-4b41-bc8e-4b918e928481" satisfied condition "Succeeded or Failed" Nov 13 01:01:03.940: INFO: Trying to get logs from node node1 pod pod-secrets-3b12857d-c088-4b41-bc8e-4b918e928481 container secret-volume-test: STEP: delete the pod Nov 13 01:01:03.956: INFO: Waiting for pod pod-secrets-3b12857d-c088-4b41-bc8e-4b918e928481 to disappear Nov 13 01:01:03.958: INFO: Pod pod-secrets-3b12857d-c088-4b41-bc8e-4b918e928481 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:03.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9402" for this suite. • [SLOW TEST:8.084 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":425,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:02.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Nov 13 01:01:02.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6490 create -f -' Nov 13 01:01:02.483: INFO: stderr: "" Nov 13 01:01:02.483: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Nov 13 01:01:03.487: INFO: Selector matched 1 pods for map[app:agnhost] Nov 13 01:01:03.487: INFO: Found 0 / 1 Nov 13 01:01:04.486: INFO: Selector matched 1 pods for map[app:agnhost] Nov 13 01:01:04.486: INFO: Found 0 / 1 Nov 13 01:01:05.486: INFO: Selector matched 1 pods for map[app:agnhost] Nov 13 01:01:05.486: INFO: Found 0 / 1 Nov 13 01:01:06.486: INFO: Selector matched 1 pods for map[app:agnhost] Nov 13 01:01:06.486: INFO: Found 0 / 1 Nov 13 01:01:07.487: INFO: Selector matched 1 pods for map[app:agnhost] Nov 13 01:01:07.487: INFO: Found 0 / 1 Nov 13 01:01:08.485: INFO: Selector matched 1 pods for map[app:agnhost] Nov 13 01:01:08.485: INFO: Found 0 / 1 Nov 13 01:01:09.488: INFO: Selector matched 1 pods for map[app:agnhost] Nov 13 01:01:09.488: INFO: Found 1 / 1 Nov 13 01:01:09.488: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 STEP: patching all pods Nov 13 01:01:09.490: INFO: Selector matched 1 pods for map[app:agnhost] Nov 13 01:01:09.490: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Nov 13 01:01:09.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6490 patch pod agnhost-primary-pw267 -p {"metadata":{"annotations":{"x":"y"}}}' Nov 13 01:01:09.658: INFO: stderr: "" Nov 13 01:01:09.658: INFO: stdout: "pod/agnhost-primary-pw267 patched\n" STEP: checking annotations Nov 13 01:01:09.660: INFO: Selector matched 1 pods for map[app:agnhost] Nov 13 01:01:09.660: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:09.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6490" for this suite. • [SLOW TEST:7.648 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460 should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":25,"skipped":558,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:02.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-2e102531-b432-4913-b27a-f667a29cb105 STEP: Creating a pod to test consume secrets Nov 13 01:01:02.776: INFO: Waiting up to 5m0s for pod "pod-secrets-daefaabf-40f5-419b-817e-cff8e5e75f47" in namespace "secrets-3352" to be "Succeeded or Failed" Nov 13 01:01:02.778: INFO: Pod "pod-secrets-daefaabf-40f5-419b-817e-cff8e5e75f47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.268746ms Nov 13 01:01:04.781: INFO: Pod "pod-secrets-daefaabf-40f5-419b-817e-cff8e5e75f47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005248531s Nov 13 01:01:06.785: INFO: Pod "pod-secrets-daefaabf-40f5-419b-817e-cff8e5e75f47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008363077s Nov 13 01:01:08.788: INFO: Pod "pod-secrets-daefaabf-40f5-419b-817e-cff8e5e75f47": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012181931s Nov 13 01:01:10.794: INFO: Pod "pod-secrets-daefaabf-40f5-419b-817e-cff8e5e75f47": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.017532159s STEP: Saw pod success Nov 13 01:01:10.794: INFO: Pod "pod-secrets-daefaabf-40f5-419b-817e-cff8e5e75f47" satisfied condition "Succeeded or Failed" Nov 13 01:01:10.796: INFO: Trying to get logs from node node2 pod pod-secrets-daefaabf-40f5-419b-817e-cff8e5e75f47 container secret-env-test: STEP: delete the pod Nov 13 01:01:10.808: INFO: Waiting for pod pod-secrets-daefaabf-40f5-419b-817e-cff8e5e75f47 to disappear Nov 13 01:01:10.810: INFO: Pod pod-secrets-daefaabf-40f5-419b-817e-cff8e5e75f47 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:10.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3352" for this suite. • [SLOW TEST:8.078 seconds] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":247,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:55.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 13 01:00:55.873: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 13 01:00:57.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362055, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362055, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362055, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362055, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:00:59.892: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362055, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362055, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362055, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362055, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:01:01.887: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362055, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362055, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362055, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362055, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:01:03.890: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362055, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362055, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362055, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362055, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 13 01:01:06.896: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:01:06.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied 
STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:15.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3154" for this suite. STEP: Destroying namespace "webhook-3154-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.677 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:04.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Nov 13 01:01:12.080: INFO: &Pod{ObjectMeta:{send-events-2f2faf16-c0bc-4814-a85b-b5f9bfb2de29 events-7592 4a7815fb-5052-4436-9486-f4253306aa65 72377 0 2021-11-13 01:01:04 +0000 UTC map[name:foo time:59296787] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.103" ], "mac": "a2:e1:79:a7:27:34", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.103" ], "mac": "a2:e1:79:a7:27:34", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2021-11-13 01:01:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-13 01:01:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-13 01:01:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.103\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vnhpm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vnhpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace
:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:01:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:01:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:01:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:01:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.103,StartTime:2021-11-13 01:01:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-13 01:01:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://19b3be875964b8f5ec407d00e76c513b8652028f30d089a04b72a3c1e97ef875,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.103,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Nov 13 01:01:14.086: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Nov 13 01:01:16.091: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:16.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7592" for this suite. 
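------------------------------
[Editor's note] The event assertions above can be reproduced with a field selector; a sketch using the namespace and pod name from the log:
# Both the scheduler and the kubelet emit events for the pod; filter by the involved object.
$ kubectl -n events-7592 get events --field-selector involvedObject.name=send-events-2f2faf16-c0bc-4814-a85b-b5f9bfb2de29
# Expected sources include default-scheduler (Scheduled) and the kubelet on node2 (Pulling, Created, Started).
------------------------------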
• [SLOW TEST:12.066 seconds] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":32,"skipped":459,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:10.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6115.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6115.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6115.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6115.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6115.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6115.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 13 01:01:19.012: INFO: DNS probes using dns-6115/dns-test-23697502-1bb1-4a64-b78e-d716ef0a6916 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:19.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6115" for this suite. 
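------------------------------
[Editor's note] A condensed sketch of the lookups the wheezy/jessie probe scripts above perform, runnable from any pod in the namespace (<probe-pod> is a placeholder):
# Headless-service hostname record, as probed with getent above:
$ kubectl -n dns-6115 exec <probe-pod> -- getent hosts dns-querier-2.dns-test-service-2.dns-6115.svc.cluster.local
# Pod A records use the dashed pod IP: <a-b-c-d>.<namespace>.pod.cluster.local
$ kubectl -n dns-6115 exec <probe-pod> -- sh -c 'dig +notcp +noall +answer +search "$(hostname -i | tr . -)".dns-6115.pod.cluster.local A'
------------------------------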
• [SLOW TEST:8.092 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":22,"skipped":305,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":11,"skipped":168,"failed":0} [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:15.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Nov 13 01:01:15.063: INFO: Waiting up to 5m0s for pod "pod-dbfe555e-78a0-4c3f-b4ec-fddfb81f9286" in namespace "emptydir-8689" to be "Succeeded or Failed" Nov 13 01:01:15.066: INFO: Pod "pod-dbfe555e-78a0-4c3f-b4ec-fddfb81f9286": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165872ms Nov 13 01:01:17.070: INFO: Pod "pod-dbfe555e-78a0-4c3f-b4ec-fddfb81f9286": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006228443s Nov 13 01:01:19.073: INFO: Pod "pod-dbfe555e-78a0-4c3f-b4ec-fddfb81f9286": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00962655s STEP: Saw pod success Nov 13 01:01:19.073: INFO: Pod "pod-dbfe555e-78a0-4c3f-b4ec-fddfb81f9286" satisfied condition "Succeeded or Failed" Nov 13 01:01:19.077: INFO: Trying to get logs from node node1 pod pod-dbfe555e-78a0-4c3f-b4ec-fddfb81f9286 container test-container: STEP: delete the pod Nov 13 01:01:19.104: INFO: Waiting for pod pod-dbfe555e-78a0-4c3f-b4ec-fddfb81f9286 to disappear Nov 13 01:01:19.106: INFO: Pod pod-dbfe555e-78a0-4c3f-b4ec-fddfb81f9286 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:19.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8689" for this suite. 
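------------------------------
[Editor's note] The (root,0666,tmpfs) case above mounts a memory-backed emptyDir and verifies the mode of a file written as root. A minimal sketch of that volume shape (pod name and command are illustrative):
$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/volume/f && chmod 0666 /mnt/volume/f && stat -c '%a' /mnt/volume/f && mount | grep /mnt/volume"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory              # tmpfs-backed, matching the [LinuxOnly] tmpfs case
EOF
------------------------------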
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":168,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:19.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:19.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9332" for this suite. •S ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":23,"skipped":308,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:19.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:19.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-5192" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":13,"skipped":187,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:16.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-a4b3a241-1d49-4d8b-9f20-2a7131c41919 STEP: Creating a pod to test consume secrets Nov 13 01:01:16.168: INFO: Waiting up to 5m0s for pod "pod-secrets-b61566a3-1361-48b0-ad8f-d8bf301beb7b" in namespace "secrets-1838" to be "Succeeded or Failed" Nov 13 01:01:16.171: INFO: Pod "pod-secrets-b61566a3-1361-48b0-ad8f-d8bf301beb7b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.719558ms Nov 13 01:01:18.177: INFO: Pod "pod-secrets-b61566a3-1361-48b0-ad8f-d8bf301beb7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00866881s Nov 13 01:01:20.179: INFO: Pod "pod-secrets-b61566a3-1361-48b0-ad8f-d8bf301beb7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011569086s STEP: Saw pod success Nov 13 01:01:20.180: INFO: Pod "pod-secrets-b61566a3-1361-48b0-ad8f-d8bf301beb7b" satisfied condition "Succeeded or Failed" Nov 13 01:01:20.182: INFO: Trying to get logs from node node1 pod pod-secrets-b61566a3-1361-48b0-ad8f-d8bf301beb7b container secret-volume-test: STEP: delete the pod Nov 13 01:01:20.196: INFO: Waiting for pod pod-secrets-b61566a3-1361-48b0-ad8f-d8bf301beb7b to disappear Nov 13 01:01:20.198: INFO: Pod pod-secrets-b61566a3-1361-48b0-ad8f-d8bf301beb7b no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:20.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1838" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":470,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:56.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1386 STEP: creating a pod Nov 13 01:00:56.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5253 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Nov 13 01:00:56.814: INFO: stderr: "" Nov 13 01:00:56.814: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for log generator to start. Nov 13 01:00:56.814: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Nov 13 01:00:56.814: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5253" to be "running and ready, or succeeded" Nov 13 01:00:56.818: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.275121ms Nov 13 01:00:58.824: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010278561s Nov 13 01:01:00.828: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014310599s Nov 13 01:01:02.832: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018048964s Nov 13 01:01:04.836: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true.
Elapsed: 8.022212434s Nov 13 01:01:04.836: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Nov 13 01:01:04.836: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for matching strings Nov 13 01:01:04.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5253 logs logs-generator logs-generator' Nov 13 01:01:05.011: INFO: stderr: "" Nov 13 01:01:05.011: INFO: stdout: "I1113 01:01:02.123744 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/s77 256\nI1113 01:01:02.324242 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/bnfz 543\nI1113 01:01:02.524573 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/945t 560\nI1113 01:01:02.723813 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/wtf 473\nI1113 01:01:02.924338 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/s6ql 411\nI1113 01:01:03.124725 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/b4fs 545\nI1113 01:01:03.324132 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/dzp 555\nI1113 01:01:03.524263 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/k6n9 400\nI1113 01:01:03.724699 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/89ch 305\nI1113 01:01:03.923970 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/wmcc 291\nI1113 01:01:04.124399 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/vgv 394\nI1113 01:01:04.324903 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/m5z 359\nI1113 01:01:04.524347 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/q7w 253\nI1113 01:01:04.724768 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/wtnx 351\nI1113 01:01:04.924134 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/257 251\n" STEP: limiting log lines Nov 13 01:01:05.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5253 logs logs-generator logs-generator --tail=1' Nov 13 01:01:05.197: INFO: stderr: "" Nov 13 01:01:05.197: INFO: stdout: "I1113 01:01:05.124549 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/zp66 270\n" Nov 13 01:01:05.197: INFO: got output "I1113 01:01:05.124549 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/zp66 270\n" STEP: limiting log bytes Nov 13 01:01:05.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5253 logs logs-generator logs-generator --limit-bytes=1' Nov 13 01:01:05.380: INFO: stderr: "" Nov 13 01:01:05.380: INFO: stdout: "I" Nov 13 01:01:05.380: INFO: got output "I" STEP: exposing timestamps Nov 13 01:01:05.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5253 logs logs-generator logs-generator --tail=1 --timestamps' Nov 13 01:01:05.543: INFO: stderr: "" Nov 13 01:01:05.543: INFO: stdout: "2021-11-13T01:01:05.524489191Z I1113 01:01:05.524333 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/pgg 388\n" Nov 13 01:01:05.543: INFO: got output "2021-11-13T01:01:05.524489191Z I1113 01:01:05.524333 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/pgg 388\n" STEP: restricting to a time range Nov 13 01:01:08.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5253 logs logs-generator logs-generator --since=1s' Nov 13 01:01:08.211: INFO: stderr: "" Nov 13 01:01:08.211: INFO:
stdout: "I1113 01:01:07.324531 1 logs_generator.go:76] 26 GET /api/v1/namespaces/kube-system/pods/zbj 443\nI1113 01:01:07.523845 1 logs_generator.go:76] 27 GET /api/v1/namespaces/kube-system/pods/dkhv 248\nI1113 01:01:07.724252 1 logs_generator.go:76] 28 POST /api/v1/namespaces/kube-system/pods/7nkr 410\nI1113 01:01:07.924344 1 logs_generator.go:76] 29 PUT /api/v1/namespaces/kube-system/pods/jmgm 302\nI1113 01:01:08.124746 1 logs_generator.go:76] 30 GET /api/v1/namespaces/kube-system/pods/28j2 291\n" Nov 13 01:01:08.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5253 logs logs-generator logs-generator --since=24h' Nov 13 01:01:08.387: INFO: stderr: "" Nov 13 01:01:08.387: INFO: stdout: "I1113 01:01:02.123744 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/s77 256\nI1113 01:01:02.324242 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/bnfz 543\nI1113 01:01:02.524573 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/945t 560\nI1113 01:01:02.723813 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/wtf 473\nI1113 01:01:02.924338 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/s6ql 411\nI1113 01:01:03.124725 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/b4fs 545\nI1113 01:01:03.324132 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/dzp 555\nI1113 01:01:03.524263 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/k6n9 400\nI1113 01:01:03.724699 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/89ch 305\nI1113 01:01:03.923970 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/wmcc 291\nI1113 01:01:04.124399 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/vgv 394\nI1113 01:01:04.324903 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/m5z 359\nI1113 01:01:04.524347 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/q7w 253\nI1113 01:01:04.724768 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/wtnx 351\nI1113 01:01:04.924134 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/257 251\nI1113 01:01:05.124549 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/zp66 270\nI1113 01:01:05.323924 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/zhc6 524\nI1113 01:01:05.524333 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/pgg 388\nI1113 01:01:05.724838 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/fx9h 563\nI1113 01:01:05.923970 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/ls9 344\nI1113 01:01:06.124564 1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/sn5 221\nI1113 01:01:06.323963 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/l5v 583\nI1113 01:01:06.524510 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/nk9 205\nI1113 01:01:06.723903 1 logs_generator.go:76] 23 GET /api/v1/namespaces/ns/pods/2l2 267\nI1113 01:01:06.924363 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/default/pods/d6b5 551\nI1113 01:01:07.124101 1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/47p7 294\nI1113 01:01:07.324531 1 logs_generator.go:76] 26 GET /api/v1/namespaces/kube-system/pods/zbj 443\nI1113 01:01:07.523845 1 logs_generator.go:76] 27 GET /api/v1/namespaces/kube-system/pods/dkhv 248\nI1113 01:01:07.724252 1 logs_generator.go:76] 28 POST /api/v1/namespaces/kube-system/pods/7nkr 410\nI1113 01:01:07.924344 1 logs_generator.go:76] 29 PUT 
/api/v1/namespaces/kube-system/pods/jmgm 302\nI1113 01:01:08.124746 1 logs_generator.go:76] 30 GET /api/v1/namespaces/kube-system/pods/28j2 291\nI1113 01:01:08.324010 1 logs_generator.go:76] 31 POST /api/v1/namespaces/default/pods/mssw 531\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391 Nov 13 01:01:08.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5253 delete pod logs-generator' Nov 13 01:01:21.374: INFO: stderr: "" Nov 13 01:01:21.374: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:21.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5253" for this suite. • [SLOW TEST:24.762 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":13,"skipped":205,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:35.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-9428, will wait for the garbage collector to delete the pods Nov 13 01:00:41.312: INFO: Deleting Job.batch foo took: 4.302657ms Nov 13 01:00:41.412: INFO: Terminating Job.batch foo pods took: 100.139153ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:21.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9428" for this suite. 
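------------------------------
[Editor's note] The [sig-apps] Job spec above creates a parallel job, deletes it, and lets the garbage collector reap the pods. A minimal equivalent (parallelism and image are illustrative; the name and namespace come from the log):
$ cat <<'EOF' | kubectl -n job-9428 apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: foo                       # name used in namespace job-9428 above
spec:
  parallelism: 2                  # the test first ensures active pods == parallelism
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox
        command: ["sleep", "3600"]
EOF
# Deleting the Job leaves pod cleanup to the garbage collector, as logged above:
$ kubectl -n job-9428 delete job foo
------------------------------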
• [SLOW TEST:46.201 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":21,"skipped":384,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:27.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-8368 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating stateful set ss in namespace statefulset-8368 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8368 Nov 13 01:00:28.004: INFO: Found 0 stateful pods, waiting for 1 Nov 13 01:00:38.008: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Nov 13 01:00:38.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8368 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 13 01:00:38.259: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 13 01:00:38.259: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 13 01:00:38.259: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 13 01:00:38.262: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Nov 13 01:00:48.267: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 13 01:00:48.267: INFO: Waiting for statefulset status.replicas updated to 0 Nov 13 01:00:48.279: INFO: POD NODE PHASE GRACE CONDITIONS Nov 13 01:00:48.279: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:28 +0000 UTC }] Nov 13 01:00:48.279: INFO: Nov 13 01:00:48.279: INFO: StatefulSet ss has not reached scale 3, at 1 Nov 13 
01:00:49.287: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996152501s Nov 13 01:00:50.291: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987790293s Nov 13 01:00:51.296: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.982443218s Nov 13 01:00:52.302: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.977559651s Nov 13 01:00:53.306: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.973460997s Nov 13 01:00:54.310: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.969193537s Nov 13 01:00:55.314: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.96487294s Nov 13 01:00:56.321: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.959213523s Nov 13 01:00:57.325: INFO: Verifying statefulset ss doesn't scale past 3 for another 953.92012ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8368 Nov 13 01:00:58.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8368 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 13 01:00:58.733: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Nov 13 01:00:58.733: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 13 01:00:58.733: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 13 01:00:58.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8368 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 13 01:00:59.112: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Nov 13 01:00:59.112: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 13 01:00:59.112: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 13 01:00:59.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8368 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 13 01:00:59.723: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Nov 13 01:00:59.723: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 13 01:00:59.723: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 13 01:00:59.726: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Nov 13 01:00:59.726: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Nov 13 01:00:59.726: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Nov 13 01:00:59.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8368 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 13 01:01:00.539: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 13 01:01:00.540: INFO: stdout: 
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 13 01:01:00.540: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 13 01:01:00.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8368 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 13 01:01:01.167: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 13 01:01:01.167: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 13 01:01:01.167: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 13 01:01:01.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8368 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 13 01:01:01.657: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 13 01:01:01.657: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 13 01:01:01.657: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 13 01:01:01.657: INFO: Waiting for statefulset status.replicas updated to 0 Nov 13 01:01:01.660: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Nov 13 01:01:11.668: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 13 01:01:11.668: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Nov 13 01:01:11.668: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Nov 13 01:01:11.679: INFO: POD NODE PHASE GRACE CONDITIONS Nov 13 01:01:11.679: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:28 +0000 UTC }] Nov 13 01:01:11.679: INFO: ss-1 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC }] Nov 13 01:01:11.679: INFO: ss-2 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC }] Nov 13 01:01:11.679: INFO: Nov 13 01:01:11.679: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 13 01:01:12.682: INFO: POD NODE PHASE GRACE CONDITIONS Nov 
13 01:01:12.682: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:28 +0000 UTC }] Nov 13 01:01:12.682: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC }] Nov 13 01:01:12.682: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC }] Nov 13 01:01:12.683: INFO: Nov 13 01:01:12.683: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 13 01:01:13.686: INFO: POD NODE PHASE GRACE CONDITIONS Nov 13 01:01:13.686: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:28 +0000 UTC }] Nov 13 01:01:13.686: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC }] Nov 13 01:01:13.686: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC }] Nov 13 01:01:13.686: INFO: Nov 13 01:01:13.686: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 13 01:01:14.690: INFO: POD NODE PHASE GRACE CONDITIONS Nov 13 01:01:14.690: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:28 +0000 UTC }] Nov 13 01:01:14.690: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC }] Nov 13 01:01:14.690: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC }] Nov 13 01:01:14.690: INFO: Nov 13 01:01:14.690: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 13 01:01:15.695: INFO: POD NODE PHASE GRACE CONDITIONS Nov 13 01:01:15.695: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:28 +0000 UTC }] Nov 13 01:01:15.695: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC }] Nov 13 01:01:15.695: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC }] Nov 13 01:01:15.695: INFO: Nov 13 01:01:15.695: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 13 01:01:16.700: INFO: POD NODE PHASE GRACE CONDITIONS Nov 13 01:01:16.700: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:28 +0000 UTC }] Nov 13 
01:01:16.700: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC }] Nov 13 01:01:16.700: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC }] Nov 13 01:01:16.700: INFO: Nov 13 01:01:16.700: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 13 01:01:17.706: INFO: POD NODE PHASE GRACE CONDITIONS Nov 13 01:01:17.706: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:28 +0000 UTC }] Nov 13 01:01:17.706: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC }] Nov 13 01:01:17.706: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC }] Nov 13 01:01:17.706: INFO: Nov 13 01:01:17.706: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 13 01:01:18.711: INFO: POD NODE PHASE GRACE CONDITIONS Nov 13 01:01:18.711: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:28 +0000 UTC }] Nov 13 01:01:18.712: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC }] Nov 13 01:01:18.712: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC }] Nov 13 01:01:18.712: INFO: Nov 13 01:01:18.712: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 13 01:01:19.715: INFO: POD NODE PHASE GRACE CONDITIONS Nov 13 01:01:19.715: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:28 +0000 UTC }] Nov 13 01:01:19.715: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC }] Nov 13 01:01:19.715: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC }] Nov 13 01:01:19.715: INFO: Nov 13 01:01:19.715: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 13 01:01:20.721: INFO: POD NODE PHASE GRACE CONDITIONS Nov 13 01:01:20.721: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:28 +0000 UTC }] Nov 13 01:01:20.721: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC }] Nov 13 
01:01:20.721: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:00:48 +0000 UTC }] Nov 13 01:01:20.721: INFO: Nov 13 01:01:20.721: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-8368 Nov 13 01:01:21.726: INFO: Scaling statefulset ss to 0 Nov 13 01:01:21.740: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Nov 13 01:01:21.742: INFO: Deleting all statefulset in ns statefulset-8368 Nov 13 01:01:21.744: INFO: Scaling statefulset ss to 0 Nov 13 01:01:21.753: INFO: Waiting for statefulset status.replicas updated to 0 Nov 13 01:01:21.756: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:21.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8368" for this suite. • [SLOW TEST:53.803 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":14,"skipped":273,"failed":0} SS ------------------------------
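For reference, the behaviour the StatefulSet spec above exercises — burst (Parallel) pod management, where scale-up and scale-down proceed without waiting for unhealthy pods — can be reproduced by hand. The sketch below is illustrative only (names, image, and probe path are assumptions, not the harness's own manifest); breaking the readiness probe by moving index.html away mirrors the kubectl exec trick in the log:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  clusterIP: None            # headless service backing the StatefulSet
  selector:
    app: ss-demo
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  podManagementPolicy: Parallel   # "burst" scaling: no ordered, one-at-a-time rollout
  replicas: 1
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: webserver
        image: httpd:2.4
        readinessProbe:           # starts failing once index.html is moved away
          httpGet:
            path: /index.html
            port: 80
EOF
$ kubectl exec ss-0 -- sh -c 'mv /usr/local/apache2/htdocs/index.html /tmp/'   # make ss-0 unready
$ kubectl scale statefulset ss --replicas=3    # scale-up proceeds despite the unready pod

With the default OrderedReady policy the scale-up would stall behind the unready ss-0; Parallel is what lets this spec run to completion.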
[BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:09.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-dd2b9317-2aa5-46ca-a231-e725f83bb22c STEP: Creating secret with name s-test-opt-upd-4d958c79-d1e7-49a8-8155-1c247d815df6 STEP: Creating the pod Nov 13 01:01:09.761: INFO: The status of Pod pod-projected-secrets-6fdcf19f-593a-4c13-82dd-b62fc41cd2d4 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:01:11.765: INFO: The status of Pod pod-projected-secrets-6fdcf19f-593a-4c13-82dd-b62fc41cd2d4 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:01:13.770: INFO: The status of Pod pod-projected-secrets-6fdcf19f-593a-4c13-82dd-b62fc41cd2d4 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:01:15.767: INFO: The status of Pod pod-projected-secrets-6fdcf19f-593a-4c13-82dd-b62fc41cd2d4 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:01:17.765: INFO: The status of Pod pod-projected-secrets-6fdcf19f-593a-4c13-82dd-b62fc41cd2d4 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-dd2b9317-2aa5-46ca-a231-e725f83bb22c STEP: Updating secret s-test-opt-upd-4d958c79-d1e7-49a8-8155-1c247d815df6 STEP: Creating secret with name s-test-opt-create-07554961-c320-4829-856a-7c6c4844839f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:21.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4792" for this suite. • [SLOW TEST:12.130 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":573,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:21.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Starting the proxy Nov 13 01:01:21.803: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4272 proxy --unix-socket=/tmp/kubectl-proxy-unix021493455/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:21.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4272" for this suite.
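The --unix-socket proxy spec above can be replayed directly against any cluster; the socket path below is illustrative, and curl needs 7.40+ for --unix-socket:

$ kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &                # serve the API over a local socket
$ curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/     # hostname is arbitrary; routing is via the socket
$ kill %1                                                              # stop the background proxy

The test passes as soon as the /api/ discovery document comes back over the socket, which is exactly what the curl above retrieves.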
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":15,"skipped":275,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:19.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 13 01:01:19.169: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4d926383-f56e-4fd7-b5fb-0526197d31e9" in namespace "downward-api-8376" to be "Succeeded or Failed" Nov 13 01:01:19.171: INFO: Pod "downwardapi-volume-4d926383-f56e-4fd7-b5fb-0526197d31e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143854ms Nov 13 01:01:21.175: INFO: Pod "downwardapi-volume-4d926383-f56e-4fd7-b5fb-0526197d31e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006046577s Nov 13 01:01:23.180: INFO: Pod "downwardapi-volume-4d926383-f56e-4fd7-b5fb-0526197d31e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011073717s Nov 13 01:01:25.183: INFO: Pod "downwardapi-volume-4d926383-f56e-4fd7-b5fb-0526197d31e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014088343s STEP: Saw pod success Nov 13 01:01:25.183: INFO: Pod "downwardapi-volume-4d926383-f56e-4fd7-b5fb-0526197d31e9" satisfied condition "Succeeded or Failed" Nov 13 01:01:25.186: INFO: Trying to get logs from node node2 pod downwardapi-volume-4d926383-f56e-4fd7-b5fb-0526197d31e9 container client-container: STEP: delete the pod Nov 13 01:01:25.199: INFO: Waiting for pod downwardapi-volume-4d926383-f56e-4fd7-b5fb-0526197d31e9 to disappear Nov 13 01:01:25.200: INFO: Pod downwardapi-volume-4d926383-f56e-4fd7-b5fb-0526197d31e9 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:25.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8376" for this suite. 
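What the Downward API volume spec above verifies — that a per-item mode is applied to the projected file — can be checked with a minimal pod along these lines (names and mode are illustrative; stat -L follows the symlink the atomic volume writer creates):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -L -c '%a' /etc/podinfo/podname && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400              # the per-item mode under test
EOF
$ kubectl logs downward-mode-demo   # expect "400" followed by the pod name

Like the e2e pod, this one runs to completion ("Succeeded or Failed") and the assertion is made on its log output.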
• [SLOW TEST:6.070 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":317,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:21.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 13 01:01:21.480: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27d74362-5e9a-471c-b671-9c10b3df7295" in namespace "downward-api-3571" to be "Succeeded or Failed" Nov 13 01:01:21.482: INFO: Pod "downwardapi-volume-27d74362-5e9a-471c-b671-9c10b3df7295": Phase="Pending", Reason="", readiness=false. Elapsed: 2.498269ms Nov 13 01:01:23.486: INFO: Pod "downwardapi-volume-27d74362-5e9a-471c-b671-9c10b3df7295": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00616924s Nov 13 01:01:25.489: INFO: Pod "downwardapi-volume-27d74362-5e9a-471c-b671-9c10b3df7295": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009679245s STEP: Saw pod success Nov 13 01:01:25.489: INFO: Pod "downwardapi-volume-27d74362-5e9a-471c-b671-9c10b3df7295" satisfied condition "Succeeded or Failed" Nov 13 01:01:25.491: INFO: Trying to get logs from node node1 pod downwardapi-volume-27d74362-5e9a-471c-b671-9c10b3df7295 container client-container: STEP: delete the pod Nov 13 01:01:25.504: INFO: Waiting for pod downwardapi-volume-27d74362-5e9a-471c-b671-9c10b3df7295 to disappear Nov 13 01:01:25.505: INFO: Pod downwardapi-volume-27d74362-5e9a-471c-b671-9c10b3df7295 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:25.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3571" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":393,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:20.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with configMap that has name projected-configmap-test-upd-98049a89-3ea5-49f0-a183-ec0862fe6d81 STEP: Creating the pod Nov 13 01:01:20.279: INFO: The status of Pod pod-projected-configmaps-d6ae1250-0c52-4aed-b8d5-8843f1ee2074 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:01:22.282: INFO: The status of Pod pod-projected-configmaps-d6ae1250-0c52-4aed-b8d5-8843f1ee2074 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:01:24.284: INFO: The status of Pod pod-projected-configmaps-d6ae1250-0c52-4aed-b8d5-8843f1ee2074 is Running (Ready = true) STEP: Updating configmap projected-configmap-test-upd-98049a89-3ea5-49f0-a183-ec0862fe6d81 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:26.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7695" for this suite. 
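The projected-configMap update spec above relies on the kubelet resyncing projected volume contents after the source object changes. A hand-rolled version (all names illustrative):

$ kubectl create configmap demo-cm --from-literal=key=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/projected/key; echo; sleep 5; done"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-cm
EOF
$ kubectl patch configmap demo-cm --type merge -p '{"data":{"key":"value-2"}}'
$ kubectl logs -f projected-cm-demo   # value-2 appears after the kubelet sync period (typically under a minute)

That sync delay is why the spec needs its "waiting to observe update in volume" step rather than asserting immediately.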
• [SLOW TEST:6.465 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":483,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:58.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-downwardapi-dj8b STEP: Creating a pod to test atomic-volume-subpath Nov 13 01:00:58.929: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-dj8b" in namespace "subpath-6222" to be "Succeeded or Failed" Nov 13 01:00:58.931: INFO: Pod "pod-subpath-test-downwardapi-dj8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.338342ms Nov 13 01:01:00.934: INFO: Pod "pod-subpath-test-downwardapi-dj8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005444832s Nov 13 01:01:02.939: INFO: Pod "pod-subpath-test-downwardapi-dj8b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00956669s Nov 13 01:01:04.942: INFO: Pod "pod-subpath-test-downwardapi-dj8b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012649332s Nov 13 01:01:06.945: INFO: Pod "pod-subpath-test-downwardapi-dj8b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016146333s Nov 13 01:01:08.951: INFO: Pod "pod-subpath-test-downwardapi-dj8b": Phase="Running", Reason="", readiness=true. Elapsed: 10.021789639s Nov 13 01:01:10.953: INFO: Pod "pod-subpath-test-downwardapi-dj8b": Phase="Running", Reason="", readiness=true. Elapsed: 12.024474849s Nov 13 01:01:12.958: INFO: Pod "pod-subpath-test-downwardapi-dj8b": Phase="Running", Reason="", readiness=true. Elapsed: 14.028519125s Nov 13 01:01:14.961: INFO: Pod "pod-subpath-test-downwardapi-dj8b": Phase="Running", Reason="", readiness=true. Elapsed: 16.031826256s Nov 13 01:01:16.964: INFO: Pod "pod-subpath-test-downwardapi-dj8b": Phase="Running", Reason="", readiness=true. Elapsed: 18.034591874s Nov 13 01:01:18.968: INFO: Pod "pod-subpath-test-downwardapi-dj8b": Phase="Running", Reason="", readiness=true. Elapsed: 20.039163163s Nov 13 01:01:20.971: INFO: Pod "pod-subpath-test-downwardapi-dj8b": Phase="Running", Reason="", readiness=true. Elapsed: 22.042243635s Nov 13 01:01:22.976: INFO: Pod "pod-subpath-test-downwardapi-dj8b": Phase="Running", Reason="", readiness=true. Elapsed: 24.046542744s Nov 13 01:01:24.979: INFO: Pod "pod-subpath-test-downwardapi-dj8b": Phase="Running", Reason="", readiness=true. 
Elapsed: 26.049724697s Nov 13 01:01:26.982: INFO: Pod "pod-subpath-test-downwardapi-dj8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.053429809s STEP: Saw pod success Nov 13 01:01:26.983: INFO: Pod "pod-subpath-test-downwardapi-dj8b" satisfied condition "Succeeded or Failed" Nov 13 01:01:26.985: INFO: Trying to get logs from node node2 pod pod-subpath-test-downwardapi-dj8b container test-container-subpath-downwardapi-dj8b: STEP: delete the pod Nov 13 01:01:27.041: INFO: Waiting for pod pod-subpath-test-downwardapi-dj8b to disappear Nov 13 01:01:27.043: INFO: Pod pod-subpath-test-downwardapi-dj8b no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-dj8b Nov 13 01:01:27.043: INFO: Deleting pod "pod-subpath-test-downwardapi-dj8b" in namespace "subpath-6222" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:27.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6222" for this suite. • [SLOW TEST:28.179 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":15,"skipped":294,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:27.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:27.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-1515" for this suite. 
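The sysctl spec above never schedules a pod at all: mixing a valid sysctl with invalid names is rejected by API-server validation at create time. A sketch of the same rejection (names illustrative):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced   # a "safe" sysctl, allowed by default kubelet policy
      value: "1"
    - name: foo-                     # malformed name: fails API validation
      value: "bar"
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
EOF
# Expect the apply to fail with a validation error on the sysctl name; no pod object is created.

That immediate rejection is why the spec above finishes in a few tens of milliseconds.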
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":16,"skipped":302,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:27.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Nov 13 01:01:27.189: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9697 cc852365-942d-4ed6-9e9e-2da5705da0c4 73020 0 2021-11-13 01:01:27 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-11-13 01:01:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 13 01:01:27.190: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9697 cc852365-942d-4ed6-9e9e-2da5705da0c4 73021 0 2021-11-13 01:01:27 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-11-13 01:01:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:27.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9697" for this suite. 
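The watch spec above records a resourceVersion, makes further changes, and then expects a watch started from that version to replay exactly the later events (the MODIFIED and DELETED notifications in the log). The raw watch endpoint shows the same thing; run it promptly, since old resource versions are eventually compacted away (names illustrative):

$ kubectl create configmap watch-demo
$ RV=$(kubectl get configmap watch-demo -o jsonpath='{.metadata.resourceVersion}')
$ kubectl patch configmap watch-demo --type merge -p '{"data":{"mutation":"1"}}'
$ kubectl patch configmap watch-demo --type merge -p '{"data":{"mutation":"2"}}'
$ kubectl delete configmap watch-demo
# Replay everything after the recorded version; the stream runs until Ctrl-C:
$ kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=${RV}&fieldSelector=metadata.name=watch-demo"

The stream should contain two MODIFIED events and one DELETED event, mirroring the notifications the e2e watcher received above.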
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":17,"skipped":326,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:21.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pdb STEP: Waiting for the pdb to be processed STEP: updating the pdb STEP: Waiting for the pdb to be processed STEP: patching the pdb STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be deleted [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:28.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-4724" for this suite. • [SLOW TEST:6.071 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":16,"skipped":337,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:21.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 13 01:01:21.938: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 13 01:01:23.952: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362081, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362081, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362081, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362081, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:01:25.956: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362081, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362081, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362081, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362081, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:01:27.957: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362081, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362081, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362081, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362081, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 13 01:01:30.963: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:30.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2105" for this suite. STEP: Destroying namespace "webhook-2105-markers" for this suite. 
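The webhook spec above deploys its backend, waits out the deployment availability dance logged in the status dumps, and only then registers the hook. The registration object itself is small; a sketch of just that piece (service name, path, and CA bundle are placeholders — the certificate provisioning and backend seen in this log are omitted):

$ kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: demo-mutating-webhook
webhooks:
- name: mutate-configmaps.example.com    # must be a fully qualified name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: default
      name: demo-webhook-svc             # placeholder: must serve TLS on 443
      path: /mutating-configmaps
    caBundle: "<base64-encoded CA certificate>"   # placeholder
EOF

Once registered, every configmap CREATE in scope is sent to the backend for mutation before admission — the "create a configmap that should be updated by the webhook" step above asserts exactly that.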
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.571 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":14,"skipped":235,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:25.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-3ef35a96-a392-4735-a70d-7b0b38e46f64 STEP: Creating a pod to test consume secrets Nov 13 01:01:25.276: INFO: Waiting up to 5m0s for pod "pod-secrets-cfaaf2ad-0df4-4b8f-aa32-1021080f103a" in namespace "secrets-6670" to be "Succeeded or Failed" Nov 13 01:01:25.279: INFO: Pod "pod-secrets-cfaaf2ad-0df4-4b8f-aa32-1021080f103a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.492667ms Nov 13 01:01:27.283: INFO: Pod "pod-secrets-cfaaf2ad-0df4-4b8f-aa32-1021080f103a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00622569s Nov 13 01:01:29.285: INFO: Pod "pod-secrets-cfaaf2ad-0df4-4b8f-aa32-1021080f103a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009011894s Nov 13 01:01:31.289: INFO: Pod "pod-secrets-cfaaf2ad-0df4-4b8f-aa32-1021080f103a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012469558s Nov 13 01:01:33.293: INFO: Pod "pod-secrets-cfaaf2ad-0df4-4b8f-aa32-1021080f103a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.017026709s STEP: Saw pod success Nov 13 01:01:33.293: INFO: Pod "pod-secrets-cfaaf2ad-0df4-4b8f-aa32-1021080f103a" satisfied condition "Succeeded or Failed" Nov 13 01:01:33.295: INFO: Trying to get logs from node node2 pod pod-secrets-cfaaf2ad-0df4-4b8f-aa32-1021080f103a container secret-volume-test: STEP: delete the pod Nov 13 01:01:33.309: INFO: Waiting for pod pod-secrets-cfaaf2ad-0df4-4b8f-aa32-1021080f103a to disappear Nov 13 01:01:33.311: INFO: Pod pod-secrets-cfaaf2ad-0df4-4b8f-aa32-1021080f103a no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:33.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6670" for this suite. 
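The Secrets spec above mounts one secret at two paths in the same pod; both mounts must expose the same data. A minimal equivalent (names illustrative):

$ kubectl create secret generic demo-secret --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-multi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: demo-secret
  - name: secret-volume-2
    secret:
      secretName: demo-secret
EOF
$ kubectl logs secret-multi-volume-demo   # expect value-1 printed twice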
• [SLOW TEST:8.076 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":337,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:33.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:33.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4139" for this suite. 
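The discovery walk in the CRD spec above maps one-to-one onto three raw GETs, which can be issued with kubectl directly:

$ kubectl get --raw /apis                            # group list: must contain apiextensions.k8s.io
$ kubectl get --raw /apis/apiextensions.k8s.io       # group document: must list version v1
$ kubectl get --raw /apis/apiextensions.k8s.io/v1    # resource list: must include customresourcedefinitions

Each STEP in the spec is a lookup in the JSON returned by the corresponding request above.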
• ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":26,"skipped":344,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 00:59:51.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W1113 01:00:31.558601 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 01:01:33.574: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Nov 13 01:01:33.574: INFO: Deleting pod "simpletest.rc-4j92t" in namespace "gc-7461" Nov 13 01:01:33.581: INFO: Deleting pod "simpletest.rc-6zj7b" in namespace "gc-7461" Nov 13 01:01:33.586: INFO: Deleting pod "simpletest.rc-bhfm2" in namespace "gc-7461" Nov 13 01:01:33.592: INFO: Deleting pod "simpletest.rc-bz29t" in namespace "gc-7461" Nov 13 01:01:33.597: INFO: Deleting pod "simpletest.rc-fn44r" in namespace "gc-7461" Nov 13 01:01:33.604: INFO: Deleting pod "simpletest.rc-fv568" in namespace "gc-7461" Nov 13 01:01:33.610: INFO: Deleting pod "simpletest.rc-q8w5r" in namespace "gc-7461" Nov 13 01:01:33.617: INFO: Deleting pod "simpletest.rc-vfjb4" in namespace "gc-7461" Nov 13 01:01:33.621: INFO: Deleting pod "simpletest.rc-wx56l" in namespace "gc-7461" Nov 13 01:01:33.627: INFO: Deleting pod "simpletest.rc-xz246" in namespace "gc-7461" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:33.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7461" for this suite. 
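The garbage-collector spec above deletes a replication controller with orphaning delete options, waits 30 seconds to confirm the GC leaves the pods alone, and then removes the orphans itself — the string of "Deleting pod simpletest.rc-*" lines. By hand it looks like this (names illustrative; --cascade=orphan needs kubectl 1.20+, older clients use --cascade=false):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: simpletest-rc
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
$ kubectl delete rc simpletest-rc --cascade=orphan   # delete the RC, orphaning its pods
$ kubectl get pods -l app=gc-demo                    # pods survive, now without an owner
$ kubectl delete pods -l app=gc-demo                 # clean up the orphans explicitly, as the spec does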
• [SLOW TEST:102.160 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":18,"skipped":235,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:00:36.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Nov 13 01:00:36.968: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8951 a402901f-d497-41b7-9d48-2a0e48889c77 71179 0 2021-11-13 01:00:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-11-13 01:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Nov 13 01:00:36.969: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8951 a402901f-d497-41b7-9d48-2a0e48889c77 71179 0 2021-11-13 01:00:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-11-13 01:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Nov 13 01:00:46.975: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8951 a402901f-d497-41b7-9d48-2a0e48889c77 71496 0 2021-11-13 01:00:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-11-13 01:00:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 13 01:00:46.976: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8951 a402901f-d497-41b7-9d48-2a0e48889c77 71496 0 2021-11-13 01:00:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-11-13 01:00:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Nov 13 01:00:56.983: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a 
watch-8951 a402901f-d497-41b7-9d48-2a0e48889c77 71901 0 2021-11-13 01:00:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-11-13 01:00:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 13 01:00:56.984: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8951 a402901f-d497-41b7-9d48-2a0e48889c77 71901 0 2021-11-13 01:00:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-11-13 01:00:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Nov 13 01:01:06.989: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8951 a402901f-d497-41b7-9d48-2a0e48889c77 72252 0 2021-11-13 01:00:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-11-13 01:00:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 13 01:01:06.989: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8951 a402901f-d497-41b7-9d48-2a0e48889c77 72252 0 2021-11-13 01:00:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-11-13 01:00:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Nov 13 01:01:16.994: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8951 fff1d1ce-733e-4dbb-8439-ce4e281c73a4 72526 0 2021-11-13 01:01:16 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-11-13 01:01:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Nov 13 01:01:16.994: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8951 fff1d1ce-733e-4dbb-8439-ce4e281c73a4 72526 0 2021-11-13 01:01:16 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-11-13 01:01:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Nov 13 01:01:26.998: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8951 fff1d1ce-733e-4dbb-8439-ce4e281c73a4 72984 0 2021-11-13 01:01:16 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-11-13 01:01:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Nov 13 01:01:26.999: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8951 fff1d1ce-733e-4dbb-8439-ce4e281c73a4 72984 0 
2021-11-13 01:01:16 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-11-13 01:01:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:36.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8951" for this suite. • [SLOW TEST:60.064 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":25,"skipped":503,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:27.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-2eb60430-196f-42b0-abac-f03208758c76 STEP: Creating a pod to test consume configMaps Nov 13 01:01:27.280: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6424be0b-f046-4a35-994f-d1afe84b3e46" in namespace "projected-2802" to be "Succeeded or Failed" Nov 13 01:01:27.282: INFO: Pod "pod-projected-configmaps-6424be0b-f046-4a35-994f-d1afe84b3e46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.516944ms Nov 13 01:01:29.285: INFO: Pod "pod-projected-configmaps-6424be0b-f046-4a35-994f-d1afe84b3e46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005305993s Nov 13 01:01:31.288: INFO: Pod "pod-projected-configmaps-6424be0b-f046-4a35-994f-d1afe84b3e46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008777505s Nov 13 01:01:33.293: INFO: Pod "pod-projected-configmaps-6424be0b-f046-4a35-994f-d1afe84b3e46": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013441297s Nov 13 01:01:35.296: INFO: Pod "pod-projected-configmaps-6424be0b-f046-4a35-994f-d1afe84b3e46": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016821332s Nov 13 01:01:37.300: INFO: Pod "pod-projected-configmaps-6424be0b-f046-4a35-994f-d1afe84b3e46": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.019989439s STEP: Saw pod success Nov 13 01:01:37.300: INFO: Pod "pod-projected-configmaps-6424be0b-f046-4a35-994f-d1afe84b3e46" satisfied condition "Succeeded or Failed" Nov 13 01:01:37.303: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-6424be0b-f046-4a35-994f-d1afe84b3e46 container agnhost-container: STEP: delete the pod Nov 13 01:01:37.318: INFO: Waiting for pod pod-projected-configmaps-6424be0b-f046-4a35-994f-d1afe84b3e46 to disappear Nov 13 01:01:37.320: INFO: Pod pod-projected-configmaps-6424be0b-f046-4a35-994f-d1afe84b3e46 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:37.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2802" for this suite. • [SLOW TEST:10.084 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":351,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:21.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:01:27.891: INFO: Deleting pod "var-expansion-a0c44430-4b88-4685-abc2-f24b070865ff" in namespace "var-expansion-6289" Nov 13 01:01:27.896: INFO: Wait up to 5m0s for pod "var-expansion-a0c44430-4b88-4685-abc2-f24b070865ff" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:37.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6289" for this suite. 
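------------------------------
The [sig-api-machinery] Watchers spec above is the clearest step-by-step passage in this stretch of the log: three watches (label A, label B, A-or-B) each receive typed ADDED/MODIFIED/DELETED events as the ConfigMaps change. Below is a minimal client-go sketch of one such watch; only the kubeconfig path is taken from the log, and the namespace "watch-demo" is an illustrative assumption.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite logs with ">>> kubeConfig:".
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// One of the spec's three watches: ConfigMaps matching label A only.
	w, err := cs.CoreV1().ConfigMaps("watch-demo").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Events arrive typed ADDED / MODIFIED / DELETED, matching the
	// "Got : ADDED &ConfigMap{...}" lines in the log above.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
}
------------------------------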
• [SLOW TEST:16.060 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":27,"skipped":581,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:37.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:38.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2810" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":28,"skipped":603,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:38.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Nov 13 01:01:41.091: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:41.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4225" for this suite. 
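------------------------------
The [sig-node] Container Runtime spec above verifies that a message written to the termination-message file is surfaced in the container status ("Expected: &{OK} to match ... OK") even with TerminationMessagePolicy set to FallbackToLogsOnError, which only falls back to container logs when the container fails with an empty file. A sketch of an equivalent pod, assuming a generic busybox image rather than the suite's own test image:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox:1.29", // illustrative image
				// Writes "OK" into the termination-message file before exiting 0.
				Command:                []string{"/bin/sh", "-c", "printf OK > /dev/termination-log"},
				TerminationMessagePath: "/dev/termination-log",
				// On success the kubelet reads the file; logs are consulted only
				// when the container fails and the file is empty.
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------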
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":606,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:28.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:01:28.098: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Nov 13 01:01:36.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4894 --namespace=crd-publish-openapi-4894 create -f -' Nov 13 01:01:37.071: INFO: stderr: "" Nov 13 01:01:37.071: INFO: stdout: "e2e-test-crd-publish-openapi-7323-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Nov 13 01:01:37.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4894 --namespace=crd-publish-openapi-4894 delete e2e-test-crd-publish-openapi-7323-crds test-cr' Nov 13 01:01:37.256: INFO: stderr: "" Nov 13 01:01:37.256: INFO: stdout: "e2e-test-crd-publish-openapi-7323-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Nov 13 01:01:37.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4894 --namespace=crd-publish-openapi-4894 apply -f -' Nov 13 01:01:37.630: INFO: stderr: "" Nov 13 01:01:37.630: INFO: stdout: "e2e-test-crd-publish-openapi-7323-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Nov 13 01:01:37.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4894 --namespace=crd-publish-openapi-4894 delete e2e-test-crd-publish-openapi-7323-crds test-cr' Nov 13 01:01:37.804: INFO: stderr: "" Nov 13 01:01:37.804: INFO: stdout: "e2e-test-crd-publish-openapi-7323-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Nov 13 01:01:37.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4894 explain e2e-test-crd-publish-openapi-7323-crds' Nov 13 01:01:38.147: INFO: stderr: "" Nov 13 01:01:38.147: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7323-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:41.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4894" for this suite. • [SLOW TEST:13.602 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":17,"skipped":354,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:33.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 13 01:01:33.705: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 13 01:01:35.714: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362093, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362093, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362093, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362093, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Nov 13 01:01:37.719: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362093, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362093, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362093, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362093, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:01:39.719: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362093, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362093, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362093, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362093, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 13 01:01:42.723: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:42.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9374" for this suite. STEP: Destroying namespace "webhook-9374-markers" for this suite. 
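------------------------------
The AdmissionWebhook spec above lists the registered webhook configurations and then deletes them as a labelled collection before checking that a fresh ConfigMap is no longer mutated. A minimal sketch of those two calls; the label selector is an illustrative assumption, not the one the suite uses.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// List the registered mutating webhook configurations (cluster-scoped).
	list, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
		List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, wh := range list.Items {
		fmt.Println(wh.Name)
	}

	// Delete a labelled collection in one call, mirroring "Deleting the
	// collection of validation webhooks" above.
	err = cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
		DeleteCollection(ctx, metav1.DeleteOptions{}, metav1.ListOptions{
			LabelSelector: "e2e-demo=mutating-webhooks",
		})
	if err != nil {
		panic(err)
	}
}
------------------------------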
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.477 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":27,"skipped":345,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:37.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-769d69d9-9596-4e3f-8bdf-f269882e2373 STEP: Creating a pod to test consume configMaps Nov 13 01:01:37.060: INFO: Waiting up to 5m0s for pod "pod-configmaps-8e213bed-8b17-437b-9ec4-cd3361b64b76" in namespace "configmap-1409" to be "Succeeded or Failed" Nov 13 01:01:37.063: INFO: Pod "pod-configmaps-8e213bed-8b17-437b-9ec4-cd3361b64b76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.446756ms Nov 13 01:01:39.067: INFO: Pod "pod-configmaps-8e213bed-8b17-437b-9ec4-cd3361b64b76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006159578s Nov 13 01:01:41.070: INFO: Pod "pod-configmaps-8e213bed-8b17-437b-9ec4-cd3361b64b76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009102993s Nov 13 01:01:43.073: INFO: Pod "pod-configmaps-8e213bed-8b17-437b-9ec4-cd3361b64b76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012845848s STEP: Saw pod success Nov 13 01:01:43.073: INFO: Pod "pod-configmaps-8e213bed-8b17-437b-9ec4-cd3361b64b76" satisfied condition "Succeeded or Failed" Nov 13 01:01:43.075: INFO: Trying to get logs from node node2 pod pod-configmaps-8e213bed-8b17-437b-9ec4-cd3361b64b76 container configmap-volume-test: STEP: delete the pod Nov 13 01:01:43.087: INFO: Waiting for pod pod-configmaps-8e213bed-8b17-437b-9ec4-cd3361b64b76 to disappear Nov 13 01:01:43.088: INFO: Pod pod-configmaps-8e213bed-8b17-437b-9ec4-cd3361b64b76 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:43.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1409" for this suite. 
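------------------------------
The [sig-storage] ConfigMap spec above mounts one ConfigMap through two separate volumes in the same pod. A minimal sketch of that shape, with illustrative names and image; the suite's own pod uses its test images and generated names.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "default" // illustrative namespace

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "shared-cm"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// The same ConfigMap backs two volumes, mounted at two paths in one container.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "cm-two-volumes"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "vol-1", VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "shared-cm"}}}},
				{Name: "vol-2", VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "shared-cm"}}}},
			},
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox:1.29", // illustrative image
				Command: []string{"/bin/sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "vol-1", MountPath: "/etc/cm-1"},
					{Name: "vol-2", MountPath: "/etc/cm-2"},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------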
• [SLOW TEST:6.074 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":509,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:41.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-d1421df1-083e-4ef1-88bd-e6df78ca6f07 STEP: Creating a pod to test consume secrets Nov 13 01:01:41.215: INFO: Waiting up to 5m0s for pod "pod-secrets-eaa54796-e62b-4f1a-aa5e-d4a68bcd5ca4" in namespace "secrets-6213" to be "Succeeded or Failed" Nov 13 01:01:41.217: INFO: Pod "pod-secrets-eaa54796-e62b-4f1a-aa5e-d4a68bcd5ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.472797ms Nov 13 01:01:43.220: INFO: Pod "pod-secrets-eaa54796-e62b-4f1a-aa5e-d4a68bcd5ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005535233s Nov 13 01:01:45.224: INFO: Pod "pod-secrets-eaa54796-e62b-4f1a-aa5e-d4a68bcd5ca4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00919335s STEP: Saw pod success Nov 13 01:01:45.224: INFO: Pod "pod-secrets-eaa54796-e62b-4f1a-aa5e-d4a68bcd5ca4" satisfied condition "Succeeded or Failed" Nov 13 01:01:45.227: INFO: Trying to get logs from node node1 pod pod-secrets-eaa54796-e62b-4f1a-aa5e-d4a68bcd5ca4 container secret-volume-test: STEP: delete the pod Nov 13 01:01:45.240: INFO: Waiting for pod pod-secrets-eaa54796-e62b-4f1a-aa5e-d4a68bcd5ca4 to disappear Nov 13 01:01:45.244: INFO: Pod pod-secrets-eaa54796-e62b-4f1a-aa5e-d4a68bcd5ca4 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:45.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6213" for this suite. STEP: Destroying namespace "secret-namespace-5628" for this suite. 
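------------------------------
The [sig-storage] Secrets spec above (note its two namespace teardowns) plants an identically named secret in a second namespace and asserts that the pod's mount still resolves the name in the pod's own namespace. A sketch under the assumption of two pre-existing namespaces, "demo-a" and "demo-b":

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Identically named secrets in two namespaces (both assumed to exist).
	for _, ns := range []string{"demo-a", "demo-b"} {
		secret := &corev1.Secret{
			ObjectMeta: metav1.ObjectMeta{Name: "shared-name"},
			Data:       map[string][]byte{"data-1": []byte("value-1\n")},
		}
		if _, err := cs.CoreV1().Secrets(ns).Create(ctx, secret, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}

	// The mount resolves "shared-name" in the pod's own namespace (demo-a);
	// the demo-b copy plays no part, which is what the spec asserts.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-mount-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "shared-name"},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "main",
				Image:        "busybox:1.29", // illustrative image
				Command:      []string{"cat", "/etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("demo-a").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------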
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":639,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:41.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:45.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-1692" for this suite. 
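------------------------------
The Sysctls spec above requests kernel.shm_rmid_forced at pod level and then reads it back from inside the container. A minimal sketch of such a pod; on a real cluster the kubelet admits it only if the sysctl is in the safe set or listed in --allowed-unsafe-sysctls, which is what "unsafe sysctls which are actually allowed" refers to.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// The sysctl is requested in the pod-level security context.
			SecurityContext: &corev1.PodSecurityContext{
				Sysctls: []corev1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox:1.29", // illustrative image
				Command: []string{"/bin/sh", "-c", "sysctl kernel.shm_rmid_forced"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------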
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":18,"skipped":365,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:33.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service multi-endpoint-test in namespace services-6643 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6643 to expose endpoints map[] Nov 13 01:01:33.675: INFO: Failed to get Endpoints object: endpoints "multi-endpoint-test" not found Nov 13 01:01:34.683: INFO: successfully validated that service multi-endpoint-test in namespace services-6643 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-6643 Nov 13 01:01:34.698: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:01:36.701: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:01:38.701: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:01:40.701: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6643 to expose endpoints map[pod1:[100]] Nov 13 01:01:40.711: INFO: successfully validated that service multi-endpoint-test in namespace services-6643 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-6643 Nov 13 01:01:40.723: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:01:42.729: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:01:44.726: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:01:46.725: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6643 to expose endpoints map[pod1:[100] pod2:[101]] Nov 13 01:01:46.738: INFO: successfully validated that service multi-endpoint-test in namespace services-6643 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-6643 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6643 to expose endpoints map[pod2:[101]] Nov 13 01:01:46.753: INFO: successfully validated that service multi-endpoint-test in namespace services-6643 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-6643 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6643 to expose endpoints map[] Nov 13 01:01:46.763: INFO: successfully validated that service multi-endpoint-test in namespace services-6643 exposes endpoints map[] [AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:46.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6643" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:13.131 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":19,"skipped":238,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:45.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Nov 13 01:01:45.819: INFO: Waiting up to 5m0s for pod "pod-66d936c6-1543-4969-9248-c5a2b10facbe" in namespace "emptydir-4748" to be "Succeeded or Failed" Nov 13 01:01:45.821: INFO: Pod "pod-66d936c6-1543-4969-9248-c5a2b10facbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057719ms Nov 13 01:01:47.826: INFO: Pod "pod-66d936c6-1543-4969-9248-c5a2b10facbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006591101s Nov 13 01:01:49.829: INFO: Pod "pod-66d936c6-1543-4969-9248-c5a2b10facbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010033225s STEP: Saw pod success Nov 13 01:01:49.830: INFO: Pod "pod-66d936c6-1543-4969-9248-c5a2b10facbe" satisfied condition "Succeeded or Failed" Nov 13 01:01:49.832: INFO: Trying to get logs from node node1 pod pod-66d936c6-1543-4969-9248-c5a2b10facbe container test-container: STEP: delete the pod Nov 13 01:01:49.842: INFO: Waiting for pod pod-66d936c6-1543-4969-9248-c5a2b10facbe to disappear Nov 13 01:01:49.844: INFO: Pod pod-66d936c6-1543-4969-9248-c5a2b10facbe no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:49.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4748" for this suite. 
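------------------------------
The EmptyDir spec above checks file modes for a root-owned 0777 file on the default (node disk) medium. A rough equivalent with a plain shell container instead of the suite's mounttest image; names and image are illustrative.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Empty Medium means the default: backed by node storage.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox:1.29", // illustrative image
				Command: []string{"/bin/sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "test-volume", MountPath: "/test-volume",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------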
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":374,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:46.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 STEP: creating the pod Nov 13 01:01:46.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2429 create -f -' Nov 13 01:01:47.259: INFO: stderr: "" Nov 13 01:01:47.259: INFO: stdout: "pod/pause created\n" Nov 13 01:01:47.259: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Nov 13 01:01:47.259: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2429" to be "running and ready" Nov 13 01:01:47.261: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.258774ms Nov 13 01:01:49.265: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005599237s Nov 13 01:01:51.269: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.009617728s Nov 13 01:01:51.269: INFO: Pod "pause" satisfied condition "running and ready" Nov 13 01:01:51.269: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: adding the label testing-label with value testing-label-value to a pod Nov 13 01:01:51.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2429 label pods pause testing-label=testing-label-value' Nov 13 01:01:51.438: INFO: stderr: "" Nov 13 01:01:51.438: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Nov 13 01:01:51.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2429 get pod pause -L testing-label' Nov 13 01:01:51.603: INFO: stderr: "" Nov 13 01:01:51.603: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Nov 13 01:01:51.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2429 label pods pause testing-label-' Nov 13 01:01:51.755: INFO: stderr: "" Nov 13 01:01:51.755: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Nov 13 01:01:51.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2429 get pod pause -L testing-label' Nov 13 01:01:51.903: INFO: stderr: "" Nov 13 01:01:51.903: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: using delete to clean up resources Nov 13 01:01:51.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2429 delete --grace-period=0 --force -f -' Nov 13 01:01:52.022: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 13 01:01:52.022: INFO: stdout: "pod \"pause\" force deleted\n" Nov 13 01:01:52.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2429 get rc,svc -l name=pause --no-headers' Nov 13 01:01:52.207: INFO: stderr: "No resources found in kubectl-2429 namespace.\n" Nov 13 01:01:52.207: INFO: stdout: "" Nov 13 01:01:52.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2429 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 13 01:01:52.354: INFO: stderr: "" Nov 13 01:01:52.354: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:52.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2429" for this suite. 
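------------------------------
The Kubectl label steps above have a direct client-go equivalent: a strategic merge patch on the pod's labels. Setting a key to null is what `kubectl label pods pause testing-label-` (the trailing dash) does under the hood. A minimal sketch; the namespace is illustrative since the run's namespace was destroyed.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	pods := cs.CoreV1().Pods("default") // illustrative namespace

	// Like `kubectl label pods pause testing-label=testing-label-value`.
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := pods.Patch(ctx, "pause", types.StrategicMergePatchType, add, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Like `kubectl label pods pause testing-label-`: null removes the key.
	remove := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	if _, err := pods.Patch(ctx, "pause", types.StrategicMergePatchType, remove, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
------------------------------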
• [SLOW TEST:5.575 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1306 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":20,"skipped":241,"failed":0} [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:52.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Nov 13 01:01:52.398: INFO: Waiting up to 5m0s for pod "downward-api-064a9ac1-4288-4dd6-b022-714abb4242a0" in namespace "downward-api-2488" to be "Succeeded or Failed" Nov 13 01:01:52.400: INFO: Pod "downward-api-064a9ac1-4288-4dd6-b022-714abb4242a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195717ms Nov 13 01:01:54.403: INFO: Pod "downward-api-064a9ac1-4288-4dd6-b022-714abb4242a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005318811s Nov 13 01:01:56.407: INFO: Pod "downward-api-064a9ac1-4288-4dd6-b022-714abb4242a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009821199s STEP: Saw pod success Nov 13 01:01:56.407: INFO: Pod "downward-api-064a9ac1-4288-4dd6-b022-714abb4242a0" satisfied condition "Succeeded or Failed" Nov 13 01:01:56.409: INFO: Trying to get logs from node node1 pod downward-api-064a9ac1-4288-4dd6-b022-714abb4242a0 container dapi-container: STEP: delete the pod Nov 13 01:01:56.421: INFO: Waiting for pod downward-api-064a9ac1-4288-4dd6-b022-714abb4242a0 to disappear Nov 13 01:01:56.424: INFO: Pod downward-api-064a9ac1-4288-4dd6-b022-714abb4242a0 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:56.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2488" for this suite. 
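------------------------------
The Downward API spec above relies on a documented fallback: when a container declares no resource limits, limits.cpu and limits.memory exposed through the downward API resolve to the node's allocatable values. A sketch of a pod exercising that path, with illustrative names and image:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29", // illustrative image
				Command: []string{"/bin/sh", "-c", "env | grep LIMIT"},
				// No resources.limits are declared, so these env vars fall back
				// to node allocatable, which is the behaviour the spec verifies.
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"}}},
					{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"}}},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------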
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":241,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:56.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should complete a service status lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Service STEP: watching for the Service to be added Nov 13 01:01:56.478: INFO: Found Service test-service-9nktl in namespace services-1524 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] Nov 13 01:01:56.478: INFO: Service test-service-9nktl created STEP: Getting /status Nov 13 01:01:56.481: INFO: Service test-service-9nktl has LoadBalancer: {[]} STEP: patching the ServiceStatus STEP: watching for the Service to be patched Nov 13 01:01:56.486: INFO: observed Service test-service-9nktl in namespace services-1524 with annotations: map[] & LoadBalancer: {[]} Nov 13 01:01:56.486: INFO: Found Service test-service-9nktl in namespace services-1524 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} Nov 13 01:01:56.486: INFO: Service test-service-9nktl has service status patched STEP: updating the ServiceStatus Nov 13 01:01:56.496: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the Service to be updated Nov 13 01:01:56.497: INFO: Observed Service test-service-9nktl in namespace services-1524 with annotations: map[] & Conditions: {[]} Nov 13 01:01:56.497: INFO: Observed event: &Service{ObjectMeta:{test-service-9nktl services-1524 2e135104-5ee6-46f0-b458-aa2a2337267a 73976 0 2021-11-13 01:01:56 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2021-11-13 01:01:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}},"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.233.59.18,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:*SingleStack,ClusterIPs:[10.233.59.18],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} Nov 13 01:01:56.498: INFO: Found Service test-service-9nktl in namespace services-1524 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Nov 13 01:01:56.498: INFO: Service test-service-9nktl has service status updated STEP: patching the service STEP: watching for the Service to be patched Nov 13 01:01:56.512: INFO: observed Service test-service-9nktl in namespace services-1524 with labels: map[test-service-static:true] Nov 13 01:01:56.512: INFO: observed Service test-service-9nktl in namespace services-1524 with labels: map[test-service-static:true] Nov 13 01:01:56.512: INFO: observed Service test-service-9nktl in namespace services-1524 with labels: map[test-service-static:true] Nov 13 01:01:56.512: INFO: Found Service test-service-9nktl in namespace services-1524 with labels: map[test-service:patched test-service-static:true] Nov 13 01:01:56.512: INFO: Service test-service-9nktl patched STEP: deleting the service STEP: watching for the Service to be deleted Nov 13 01:01:56.524: INFO: Observed event: ADDED Nov 13 01:01:56.524: INFO: Observed event: MODIFIED Nov 13 01:01:56.524: INFO: Observed event: MODIFIED Nov 13 01:01:56.525: INFO: Observed event: MODIFIED Nov 13 01:01:56.525: INFO: Found Service test-service-9nktl in namespace services-1524 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] Nov 13 01:01:56.525: INFO: Service test-service-9nktl deleted [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:56.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1524" for this suite. 
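------------------------------
The "patching the ServiceStatus" step above writes through the status subresource, which ordinary updates do not touch. A minimal sketch of that call, reusing the spec's fake ingress IP 203.0.113.1 and patchedstatus annotation; the service name and namespace are illustrative.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Merge-patch the "status" subresource (the trailing argument) so the
	// LoadBalancer ingress change is accepted by the API server.
	payload := []byte(`{"metadata":{"annotations":{"patchedstatus":"true"}},` +
		`"status":{"loadBalancer":{"ingress":[{"ip":"203.0.113.1"}]}}}`)
	_, err = cs.CoreV1().Services("services-demo").Patch(context.TODO(),
		"test-service", types.MergePatchType, payload,
		metav1.PatchOptions{}, "status")
	if err != nil {
		panic(err)
	}
}
------------------------------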
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":22,"skipped":250,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:45.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:01:45.327: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Nov 13 01:01:53.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1394 --namespace=crd-publish-openapi-1394 create -f -' Nov 13 01:01:54.480: INFO: stderr: "" Nov 13 01:01:54.480: INFO: stdout: "e2e-test-crd-publish-openapi-2539-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Nov 13 01:01:54.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1394 --namespace=crd-publish-openapi-1394 delete e2e-test-crd-publish-openapi-2539-crds test-cr' Nov 13 01:01:54.651: INFO: stderr: "" Nov 13 01:01:54.651: INFO: stdout: "e2e-test-crd-publish-openapi-2539-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Nov 13 01:01:54.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1394 --namespace=crd-publish-openapi-1394 apply -f -' Nov 13 01:01:55.017: INFO: stderr: "" Nov 13 01:01:55.017: INFO: stdout: "e2e-test-crd-publish-openapi-2539-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Nov 13 01:01:55.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1394 --namespace=crd-publish-openapi-1394 delete e2e-test-crd-publish-openapi-2539-crds test-cr' Nov 13 01:01:55.198: INFO: stderr: "" Nov 13 01:01:55.198: INFO: stdout: "e2e-test-crd-publish-openapi-2539-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Nov 13 01:01:55.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1394 explain e2e-test-crd-publish-openapi-2539-crds' Nov 13 01:01:55.573: INFO: stderr: "" Nov 13 01:01:55.573: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2539-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:01:59.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1394" for this suite. 
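------------------------------
The CustomResourcePublishOpenAPI spec above exercises a CRD whose root schema preserves unknown fields, which is why kubectl accepted a CR "with any unknown properties" and why the kubectl explain output carried no per-field documentation. A sketch of such a CRD built with the apiextensions client; the group, kind, and names are illustrative, not the suite's generated ones.

package main

import (
	"context"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := apiextclient.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	preserve := true
	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextv1.CustomResourceValidation{
					// An object schema that preserves unknown fields at the root,
					// so create/apply accept arbitrary properties.
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: &preserve,
					},
				},
			}},
		},
	}
	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().
		Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------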
• [SLOW TEST:14.399 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":31,"skipped":662,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 01:01:26.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-1008
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-1008
STEP: creating replication controller externalsvc in namespace services-1008
I1113 01:01:26.758895 38 runners.go:190] Created replication controller with name: externalsvc, namespace: services-1008, replica count: 2
I1113 01:01:29.811026 38 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1113 01:01:32.812636 38 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1113 01:01:35.812896 38 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1113 01:01:38.813059 38 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the ClusterIP service to type=ExternalName
Nov 13 01:01:38.824: INFO: Creating new exec pod
Nov 13 01:01:44.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1008 exec execpodv9xh8 -- /bin/sh -x -c nslookup clusterip-service.services-1008.svc.cluster.local'
Nov 13 01:01:45.356: INFO: stderr: "+ nslookup clusterip-service.services-1008.svc.cluster.local\n"
Nov 13 01:01:45.356: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nclusterip-service.services-1008.svc.cluster.local\tcanonical name = externalsvc.services-1008.svc.cluster.local.\nName:\texternalsvc.services-1008.svc.cluster.local\nAddress: 10.233.32.252\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-1008, will wait for the garbage collector to delete the pods
Nov 13 01:01:45.414: INFO: Deleting ReplicationController externalsvc took: 4.297695ms
Nov 13 01:01:45.516: INFO: Terminating ReplicationController externalsvc pods took: 101.216613ms
Nov 13 01:02:01.736: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 01:02:01.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1008" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• [SLOW TEST:35.026 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":35,"skipped":493,"failed":0}
SS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 01:01:37.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 01:02:05.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9452" for this suite.
• [SLOW TEST:28.062 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":19,"skipped":359,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 01:01:42.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-cv7m
STEP: Creating a pod to test atomic-volume-subpath
Nov 13 01:01:42.911: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-cv7m" in namespace "subpath-5169" to be "Succeeded or Failed"
Nov 13 01:01:42.916: INFO: Pod "pod-subpath-test-configmap-cv7m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.237713ms
Nov 13 01:01:44.919: INFO: Pod "pod-subpath-test-configmap-cv7m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007459425s
Nov 13 01:01:46.922: INFO: Pod "pod-subpath-test-configmap-cv7m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01039299s
Nov 13 01:01:48.925: INFO: Pod "pod-subpath-test-configmap-cv7m": Phase="Running", Reason="", readiness=true. Elapsed: 6.014206831s
Nov 13 01:01:50.928: INFO: Pod "pod-subpath-test-configmap-cv7m": Phase="Running", Reason="", readiness=true. Elapsed: 8.017158175s
Nov 13 01:01:52.932: INFO: Pod "pod-subpath-test-configmap-cv7m": Phase="Running", Reason="", readiness=true. Elapsed: 10.020475757s
Nov 13 01:01:54.935: INFO: Pod "pod-subpath-test-configmap-cv7m": Phase="Running", Reason="", readiness=true. Elapsed: 12.023557855s
Nov 13 01:01:56.938: INFO: Pod "pod-subpath-test-configmap-cv7m": Phase="Running", Reason="", readiness=true. Elapsed: 14.026860393s
Nov 13 01:01:58.943: INFO: Pod "pod-subpath-test-configmap-cv7m": Phase="Running", Reason="", readiness=true. Elapsed: 16.03153643s
Nov 13 01:02:00.946: INFO: Pod "pod-subpath-test-configmap-cv7m": Phase="Running", Reason="", readiness=true. Elapsed: 18.034873158s
Nov 13 01:02:02.951: INFO: Pod "pod-subpath-test-configmap-cv7m": Phase="Running", Reason="", readiness=true. Elapsed: 20.039502288s
Nov 13 01:02:04.954: INFO: Pod "pod-subpath-test-configmap-cv7m": Phase="Running", Reason="", readiness=true. Elapsed: 22.042476358s
Nov 13 01:02:06.957: INFO: Pod "pod-subpath-test-configmap-cv7m": Phase="Running", Reason="", readiness=true. Elapsed: 24.045958858s
Nov 13 01:02:08.962: INFO: Pod "pod-subpath-test-configmap-cv7m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.050446272s
STEP: Saw pod success
Nov 13 01:02:08.962: INFO: Pod "pod-subpath-test-configmap-cv7m" satisfied condition "Succeeded or Failed"
Nov 13 01:02:08.964: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-cv7m container test-container-subpath-configmap-cv7m:
STEP: delete the pod
Nov 13 01:02:08.978: INFO: Waiting for pod pod-subpath-test-configmap-cv7m to disappear
Nov 13 01:02:08.980: INFO: Pod pod-subpath-test-configmap-cv7m no longer exists
STEP: Deleting pod pod-subpath-test-configmap-cv7m
Nov 13 01:02:08.980: INFO: Deleting pod "pod-subpath-test-configmap-cv7m" in namespace "subpath-5169"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 01:02:08.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5169" for this suite.
• [SLOW TEST:26.120 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":28,"skipped":355,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 01:02:01.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a service externalname-service with the type=ExternalName in namespace services-987
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-987
I1113 01:02:01.793907 38 runners.go:190] Created replication controller with name: externalname-service, namespace: services-987, replica count: 2
I1113 01:02:04.845245 38 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Nov 13 01:02:04.845: INFO: Creating new exec pod
Nov 13 01:02:09.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-987 exec execpodnjvjm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Nov 13 01:02:10.175: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Nov 13 01:02:10.175: INFO: stdout: ""
Nov 13 01:02:11.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-987 exec execpodnjvjm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Nov 13 01:02:11.447: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Nov 13 01:02:11.447: INFO: stdout: ""
Nov 13 01:02:12.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-987 exec execpodnjvjm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Nov 13 01:02:12.434: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Nov 13 01:02:12.434: INFO: stdout: ""
Nov 13 01:02:13.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-987 exec execpodnjvjm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Nov 13 01:02:13.462: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Nov 13 01:02:13.462: INFO: stdout: "externalname-service-48thv"
Nov 13 01:02:13.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-987 exec execpodnjvjm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.25.111 80'
Nov 13 01:02:13.711: INFO: stderr: "+ nc -v -t -w 2 10.233.25.111 80\nConnection to 10.233.25.111 80 port [tcp/http] succeeded!\n+ echo hostName\n"
Nov 13 01:02:13.711: INFO: stdout: "externalname-service-q6t7s"
Nov 13 01:02:13.711: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 01:02:13.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-987" for this suite.
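------------------------------
The type flip verified above can be reproduced by hand. A sketch with hypothetical names (my-svc, backend.example.com), assuming a reachable cluster; the exact patch shape is an assumption, since a ClusterIP service needs ports and must drop externalName:

kubectl create service externalname my-svc --external-name backend.example.com
# JSON merge patch: remove externalName (null deletes the key), add a port,
# and let the apiserver allocate a cluster IP for the new type
kubectl patch service my-svc --type merge \
  -p '{"spec":{"type":"ClusterIP","externalName":null,"ports":[{"port":80,"targetPort":80}]}}'
kubectl get service my-svc -o jsonpath='{.spec.type} {.spec.clusterIP}{"\n"}'

The nc probes in the log then confirm that both the service name and the allocated cluster IP route to the backing pods.
------------------------------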
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• [SLOW TEST:11.978 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":36,"skipped":495,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 01:01:49.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: set up a multi version CRD
Nov 13 01:01:49.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 01:02:15.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1729" for this suite.
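------------------------------
The unserved-version behavior checked above can be reproduced with any multi-version CRD. A minimal sketch, with a hypothetical group and kind (gadgets.example.com, Gadget); flipping served off on one version should drop its definition from the published OpenAPI document:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: gadgets.example.com         # hypothetical
spec:
  group: example.com
  scope: Namespaced
  names: {plural: gadgets, singular: gadget, kind: Gadget}
  versions:
  - name: v1
    served: false                    # removed from the published spec
    storage: false
    schema: {openAPIV3Schema: {type: object}}
  - name: v2
    served: true                     # still published, unchanged
    storage: true
    schema: {openAPIV3Schema: {type: object}}
EOF
# the v1 definition should no longer appear (expect 0):
kubectl get --raw /openapi/v2 | grep -c 'com.example.v1.Gadget'
------------------------------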
• [SLOW TEST:25.355 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":20,"skipped":388,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 01:02:15.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
Nov 13 01:02:15.615: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 13 01:02:15.631: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 13 01:02:17.639: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362135, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362135, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362135, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362135, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 13 01:02:20.649: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 01:02:20.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3348" for this suite.
STEP: Destroying namespace "webhook-3348-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.497 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":21,"skipped":391,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 01:02:05.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 13 01:02:05.807: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 13 01:02:07.816: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362125, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362125, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362125, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362125, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 13 01:02:10.828: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 01:02:20.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9878" for this suite.
STEP: Destroying namespace "webhook-9878-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:15.530 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 01:02:20.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should delete a collection of pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create set of pods
Nov 13 01:02:20.899: INFO: created test-pod-1
Nov 13 01:02:20.908: INFO: created test-pod-2
Nov 13 01:02:20.917: INFO: created test-pod-3
STEP: waiting for all 3 pods to be located
STEP: waiting for all pods to be deleted
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 01:02:20.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5264" for this suite.
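------------------------------
The collection delete above is a single API call rather than three individual pod deletions. A sketch with hypothetical names and labels:

kubectl run test-pod-1 --image=k8s.gcr.io/pause:3.4.1 --labels=set=demo
kubectl run test-pod-2 --image=k8s.gcr.io/pause:3.4.1 --labels=set=demo
kubectl run test-pod-3 --image=k8s.gcr.io/pause:3.4.1 --labels=set=demo
# one DELETE against the pod collection, filtered by label selector:
kubectl delete pods -l set=demo
# equivalently, the raw collection endpoint (informational):
#   DELETE /api/v1/namespaces/<ns>/pods?labelSelector=set%3Ddemo
------------------------------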
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":20,"skipped":362,"failed":0}
S
------------------------------
{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":22,"skipped":452,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 01:02:09.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
Nov 13 01:02:09.481: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 13 01:02:09.493: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 13 01:02:11.502: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362129, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362129, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362129, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362129, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 13 01:02:14.515: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Nov 13 01:02:14.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6597-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 01:02:22.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4975" for this suite.
STEP: Destroying namespace "webhook-4975-markers" for this suite.
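------------------------------
A skeleton of the kind of object registered in the mutate-custom-resource step above; the webhook name, service location, and path are hypothetical, and the CA bundle is elided. The referenced service must terminate TLS and answer AdmissionReview requests with a JSON patch:

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-crd-demo                 # hypothetical
webhooks:
- name: mutate-widgets.example.com      # hypothetical
  rules:
  - apiGroups: ["example.com"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["widgets"]
  clientConfig:
    service:
      namespace: default                # hypothetical; the e2e suite uses its own webhook namespace
      name: e2e-test-webhook
      path: /mutate                     # hypothetical path
    # caBundle: <base64 CA that signed the serving cert> (elided)
  sideEffects: None
  admissionReviewVersions: ["v1"]
  failurePolicy: Fail
EOF
------------------------------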
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:13.616 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":29,"skipped":360,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 01:02:13.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Nov 13 01:02:17.820: INFO: Deleting pod "var-expansion-c83f7f73-b3ff-4841-a4ea-6ccc619438d8" in namespace "var-expansion-4649"
Nov 13 01:02:17.825: INFO: Wait up to 5m0s for pod "var-expansion-c83f7f73-b3ff-4841-a4ea-6ccc619438d8" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 01:02:23.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4649" for this suite.
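------------------------------
The failure case above hinges on subPathExpr, which substitutes only $(VAR) references to declared env vars. A sketch of the valid form, with hypothetical names; per the test name, swapping the expression for one containing backticks is expected to make the pod fail rather than expand a shell command:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo                 # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "ls /data && true"]
    env:
    - name: POD_NAME
      valueFrom: {fieldRef: {fieldPath: metadata.name}}
    volumeMounts:
    - name: work
      mountPath: /data
      subPathExpr: "$(POD_NAME)"     # valid: only $(VAR) is substituted; backticks are not
  volumes:
  - name: work
    emptyDir: {}
EOF
------------------------------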
• [SLOW TEST:10.061 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":37,"skipped":521,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 01:01:31.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Nov 13 01:01:31.086: INFO: PodSpec: initContainers in spec.initContainers
Nov 13 01:02:26.242: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ed0f8bd5-e57c-4372-8f18-6dff80a59972", GenerateName:"", Namespace:"init-container-2082", SelfLink:"", UID:"20cde3a1-98ff-4049-b9b9-21850ab88a0c", ResourceVersion:"74684", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772362091, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"86592191"}, Annotations:map[string]string{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.114\"\n ],\n \"mac\": \"1a:f0:cb:04:3f:68\",\n \"default\": true,\n \"dns\": {}\n}]", "k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.114\"\n ],\n \"mac\": \"1a:f0:cb:04:3f:68\",\n \"default\": true,\n \"dns\": {}\n}]", "kubernetes.io/psp":"collectd"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0048de0a8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0048de0c0)}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0048de0d8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0048de0f0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0048de108), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0048de120)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-9vfx2", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil),
GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc004d81f40), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-9vfx2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-9vfx2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-9vfx2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004e816b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"node2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0033201c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004e81740)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004e81760)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004e81768), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc004e8176c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc003d043c0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362091, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362091, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362091, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362091, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.10.190.208", PodIP:"10.244.4.114", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.4.114"}}, StartTime:(*v1.Time)(0xc0048de150), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003320310)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003320380)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"docker-pullable://k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"docker://e85852e785b3e818ee94aecb994a03f3893ffa820e50d21184022fccbdc1aa6e", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004d81fc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004d81fa0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc004e817ef)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 01:02:26.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2082" for this suite.
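------------------------------
The pod shape behind the dump above is visible in its Spec: init1 always exits non-zero, so init2 and the app container never start while the kubelet retries with back-off. A hand-runnable sketch using the same images and commands, with a hypothetical pod name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo                # hypothetical
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["/bin/false"]          # always fails -> restarted with back-off
  - name: init2
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["/bin/true"]           # never reached
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.4.1    # never started
EOF
kubectl get pod pod-init-demo        # stays Init:0/2, then Init:CrashLoopBackOff
------------------------------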
• [SLOW TEST:55.186 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":15,"skipped":262,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 01:02:20.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-c7c4dd45-362a-4f99-a348-7ea6bf797d4d
STEP: Creating a pod to test consume secrets
Nov 13 01:02:20.995: INFO: Waiting up to 5m0s for pod "pod-secrets-84197930-4fd2-4d1e-8547-299fb4a028d4" in namespace "secrets-4582" to be "Succeeded or Failed"
Nov 13 01:02:20.998: INFO: Pod "pod-secrets-84197930-4fd2-4d1e-8547-299fb4a028d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.598404ms
Nov 13 01:02:23.002: INFO: Pod "pod-secrets-84197930-4fd2-4d1e-8547-299fb4a028d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006503053s
Nov 13 01:02:25.006: INFO: Pod "pod-secrets-84197930-4fd2-4d1e-8547-299fb4a028d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010531354s
Nov 13 01:02:27.010: INFO: Pod "pod-secrets-84197930-4fd2-4d1e-8547-299fb4a028d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014346147s
STEP: Saw pod success
Nov 13 01:02:27.010: INFO: Pod "pod-secrets-84197930-4fd2-4d1e-8547-299fb4a028d4" satisfied condition "Succeeded or Failed"
Nov 13 01:02:27.012: INFO: Trying to get logs from node node2 pod pod-secrets-84197930-4fd2-4d1e-8547-299fb4a028d4 container secret-volume-test:
STEP: delete the pod
Nov 13 01:02:27.100: INFO: Waiting for pod pod-secrets-84197930-4fd2-4d1e-8547-299fb4a028d4 to disappear
Nov 13 01:02:27.102: INFO: Pod pod-secrets-84197930-4fd2-4d1e-8547-299fb4a028d4 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 01:02:27.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4582" for this suite.
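------------------------------
The mode/ownership combination checked above can be reproduced with a plain secret volume. A sketch with hypothetical names; defaultMode 0440 plus fsGroup lets the non-root user read via group membership, and the pod logs should show roughly "440 2000":

kubectl create secret generic mode-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo             # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # non-root
    fsGroup: 2000                    # volume files get this group
  containers:
  - name: secret-volume-test
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "stat -c '%a %g' /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: mode-demo
      defaultMode: 0440              # octal file mode for projected keys
EOF
kubectl logs secret-mode-demo
------------------------------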
• [SLOW TEST:6.155 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":367,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 01:02:22.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Nov 13 01:02:22.688: INFO: Waiting up to 5m0s for pod "downward-api-a42d3bb8-450a-47a0-8145-455a756ce460" in namespace "downward-api-2606" to be "Succeeded or Failed"
Nov 13 01:02:22.690: INFO: Pod "downward-api-a42d3bb8-450a-47a0-8145-455a756ce460": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041149ms
Nov 13 01:02:24.694: INFO: Pod "downward-api-a42d3bb8-450a-47a0-8145-455a756ce460": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005249959s
Nov 13 01:02:26.697: INFO: Pod "downward-api-a42d3bb8-450a-47a0-8145-455a756ce460": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008507679s
Nov 13 01:02:28.701: INFO: Pod "downward-api-a42d3bb8-450a-47a0-8145-455a756ce460": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012326291s
STEP: Saw pod success
Nov 13 01:02:28.701: INFO: Pod "downward-api-a42d3bb8-450a-47a0-8145-455a756ce460" satisfied condition "Succeeded or Failed"
Nov 13 01:02:28.703: INFO: Trying to get logs from node node2 pod downward-api-a42d3bb8-450a-47a0-8145-455a756ce460 container dapi-container:
STEP: delete the pod
Nov 13 01:02:28.814: INFO: Waiting for pod downward-api-a42d3bb8-450a-47a0-8145-455a756ce460 to disappear
Nov 13 01:02:28.816: INFO: Pod downward-api-a42d3bb8-450a-47a0-8145-455a756ce460 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 01:02:28.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2606" for this suite.
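------------------------------
The env-var plumbing verified above uses downward API fieldRef selectors. A sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo            # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom: {fieldRef: {fieldPath: metadata.name}}
    - name: POD_NAMESPACE
      valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
    - name: POD_IP
      valueFrom: {fieldRef: {fieldPath: status.podIP}}
EOF
kubectl logs downward-env-demo       # POD_NAME=..., POD_NAMESPACE=..., POD_IP=...
------------------------------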
• [SLOW TEST:6.171 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":380,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 01:02:20.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Nov 13 01:02:20.988: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f9714e43-f76a-4d64-9980-5874629bae28" in namespace "projected-2201" to be "Succeeded or Failed"
Nov 13 01:02:20.991: INFO: Pod "downwardapi-volume-f9714e43-f76a-4d64-9980-5874629bae28": Phase="Pending", Reason="", readiness=false. Elapsed: 3.116362ms
Nov 13 01:02:22.994: INFO: Pod "downwardapi-volume-f9714e43-f76a-4d64-9980-5874629bae28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006405655s
Nov 13 01:02:24.998: INFO: Pod "downwardapi-volume-f9714e43-f76a-4d64-9980-5874629bae28": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010397894s
Nov 13 01:02:27.002: INFO: Pod "downwardapi-volume-f9714e43-f76a-4d64-9980-5874629bae28": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014101991s
Nov 13 01:02:29.006: INFO: Pod "downwardapi-volume-f9714e43-f76a-4d64-9980-5874629bae28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018024942s
STEP: Saw pod success
Nov 13 01:02:29.006: INFO: Pod "downwardapi-volume-f9714e43-f76a-4d64-9980-5874629bae28" satisfied condition "Succeeded or Failed"
Nov 13 01:02:29.008: INFO: Trying to get logs from node node2 pod downwardapi-volume-f9714e43-f76a-4d64-9980-5874629bae28 container client-container:
STEP: delete the pod
Nov 13 01:02:29.103: INFO: Waiting for pod downwardapi-volume-f9714e43-f76a-4d64-9980-5874629bae28 to disappear
Nov 13 01:02:29.105: INFO: Pod downwardapi-volume-f9714e43-f76a-4d64-9980-5874629bae28 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 01:02:29.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2201" for this suite.
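------------------------------
The projected variant above surfaces the same pod metadata as files rather than env vars. A sketch with hypothetical names, projecting only the pod name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-podname-demo       # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef: {fieldPath: metadata.name}
EOF
kubectl logs projected-podname-demo  # prints the pod name
------------------------------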
• [SLOW TEST:8.203 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":454,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 01:02:29.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check is all data is printed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Nov 13 01:02:29.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7943 version'
Nov 13 01:02:29.376: INFO: stderr: ""
Nov 13 01:02:29.376: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.5\", GitCommit:\"aea7bbadd2fc0cd689de94a54e5b7b758869d691\", GitTreeState:\"clean\", BuildDate:\"2021-09-15T21:10:45Z\", GoVersion:\"go1.16.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.1\", GitCommit:\"5e58841cce77d4bc13713ad2b91fa0d961e69192\", GitTreeState:\"clean\", BuildDate:\"2021-05-12T14:12:29Z\", GoVersion:\"go1.16.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 01:02:29.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7943" for this suite.
•
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 01:02:28.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[BeforeEach] Listing PodDisruptionBudgets for all namespaces
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 01:02:28.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption-2
STEP: Waiting for a default service account to be provisioned in namespace
[It] should list and delete a collection of PodDisruptionBudgets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be processed
STEP: listing a collection of PDBs across all namespaces
STEP: listing a collection of PDBs in namespace disruption-4934
STEP: deleting a collection of PDBs
STEP: Waiting for the PDB collection to be deleted
[AfterEach] Listing PodDisruptionBudgets for all namespaces
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 01:02:30.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-2-9536" for this suite.
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 01:02:30.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-4934" for this suite.
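------------------------------
PodDisruptionBudgets are ordinary namespaced objects, so the listing and collection delete above are standard list/delete-collection calls. A sketch with hypothetical names and labels:

kubectl apply -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: pdb-demo                     # hypothetical
  labels: {set: demo}
spec:
  minAvailable: 1
  selector:
    matchLabels: {app: demo}
EOF
kubectl get pdb --all-namespaces     # list across all namespaces
kubectl delete pdb -l set=demo       # one call deletes the labeled collection
------------------------------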
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":31,"skipped":401,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 01:02:27.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-30619b5b-1f95-4c4a-9ac5-96bdbb5074ad
STEP: Creating a pod to test consume configMaps
Nov 13 01:02:27.235: INFO: Waiting up to 5m0s for pod "pod-configmaps-ac11f7ef-81f1-4638-8225-7cab5f2fa137" in namespace "configmap-1528" to be "Succeeded or Failed"
Nov 13 01:02:27.237: INFO: Pod "pod-configmaps-ac11f7ef-81f1-4638-8225-7cab5f2fa137": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102829ms
Nov 13 01:02:29.239: INFO: Pod "pod-configmaps-ac11f7ef-81f1-4638-8225-7cab5f2fa137": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004894807s
Nov 13 01:02:31.243: INFO: Pod "pod-configmaps-ac11f7ef-81f1-4638-8225-7cab5f2fa137": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008805284s
STEP: Saw pod success
Nov 13 01:02:31.243: INFO: Pod "pod-configmaps-ac11f7ef-81f1-4638-8225-7cab5f2fa137" satisfied condition "Succeeded or Failed"
Nov 13 01:02:31.246: INFO: Trying to get logs from node node1 pod pod-configmaps-ac11f7ef-81f1-4638-8225-7cab5f2fa137 container agnhost-container:
STEP: delete the pod
Nov 13 01:02:31.258: INFO: Waiting for pod pod-configmaps-ac11f7ef-81f1-4638-8225-7cab5f2fa137 to disappear
Nov 13 01:02:31.260: INFO: Pod pod-configmaps-ac11f7ef-81f1-4638-8225-7cab5f2fa137 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 01:02:31.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1528" for this suite.
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":393,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:43.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Nov 13 01:01:43.170: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Nov 13 01:02:03.361: INFO: >>> kubeConfig: /root/.kube/config Nov 13 01:02:11.924: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:02:32.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8061" for this suite. • [SLOW TEST:49.707 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":27,"skipped":539,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:30.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-8fd199b0-8696-4fcf-ae3f-bcf4db633f18 STEP: Creating a pod to test consume configMaps Nov 13 01:02:31.016: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ed4e47f2-4661-418f-a3a3-db377f8cbbc5" in namespace "projected-3541" to be "Succeeded or Failed" Nov 13 01:02:31.018: INFO: Pod "pod-projected-configmaps-ed4e47f2-4661-418f-a3a3-db377f8cbbc5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1.804143ms Nov 13 01:02:33.021: INFO: Pod "pod-projected-configmaps-ed4e47f2-4661-418f-a3a3-db377f8cbbc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005455383s Nov 13 01:02:35.029: INFO: Pod "pod-projected-configmaps-ed4e47f2-4661-418f-a3a3-db377f8cbbc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013202172s STEP: Saw pod success Nov 13 01:02:35.029: INFO: Pod "pod-projected-configmaps-ed4e47f2-4661-418f-a3a3-db377f8cbbc5" satisfied condition "Succeeded or Failed" Nov 13 01:02:35.031: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-ed4e47f2-4661-418f-a3a3-db377f8cbbc5 container agnhost-container: STEP: delete the pod Nov 13 01:02:35.044: INFO: Waiting for pod pod-projected-configmaps-ed4e47f2-4661-418f-a3a3-db377f8cbbc5 to disappear Nov 13 01:02:35.046: INFO: Pod pod-projected-configmaps-ed4e47f2-4661-418f-a3a3-db377f8cbbc5 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:02:35.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3541" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":413,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:31.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override all Nov 13 01:02:31.404: INFO: Waiting up to 5m0s for pod "client-containers-6183339c-e32d-4d97-a222-6ae9afc7629c" in namespace "containers-1584" to be "Succeeded or Failed" Nov 13 01:02:31.409: INFO: Pod "client-containers-6183339c-e32d-4d97-a222-6ae9afc7629c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.959671ms Nov 13 01:02:33.413: INFO: Pod "client-containers-6183339c-e32d-4d97-a222-6ae9afc7629c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009283876s Nov 13 01:02:35.417: INFO: Pod "client-containers-6183339c-e32d-4d97-a222-6ae9afc7629c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012719224s STEP: Saw pod success Nov 13 01:02:35.417: INFO: Pod "client-containers-6183339c-e32d-4d97-a222-6ae9afc7629c" satisfied condition "Succeeded or Failed" Nov 13 01:02:35.420: INFO: Trying to get logs from node node1 pod client-containers-6183339c-e32d-4d97-a222-6ae9afc7629c container agnhost-container: STEP: delete the pod Nov 13 01:02:35.441: INFO: Waiting for pod client-containers-6183339c-e32d-4d97-a222-6ae9afc7629c to disappear Nov 13 01:02:35.443: INFO: Pod client-containers-6183339c-e32d-4d97-a222-6ae9afc7629c no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:02:35.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1584" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":451,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":24,"skipped":514,"failed":0} [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:29.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting a starting resourceVersion STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:02:35.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3059" for this suite. 
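The guarantee tested above is that two watches opened at the same resourceVersion replay the same events in the same order. A rough sketch against the raw watch endpoint, assuming the timeout utility on the client and an illustrative namespace:

RV=$(kubectl get configmaps -n demo -o jsonpath='{.metadata.resourceVersion}')
# open two watches from the same point in history, then mutate something
timeout 10 kubectl get --raw "/api/v1/namespaces/demo/configmaps?watch=1&resourceVersion=${RV}" > watch-a.json &
timeout 10 kubectl get --raw "/api/v1/namespaces/demo/configmaps?watch=1&resourceVersion=${RV}" > watch-b.json &
kubectl create configmap e2e-watch-demo -n demo --from-literal=mutation=1
kubectl delete configmap e2e-watch-demo -n demo
wait
diff watch-a.json watch-b.json   # expected: identical event streams, no output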
• [SLOW TEST:6.107 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":25,"skipped":514,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:35.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token STEP: reading a file in the container Nov 13 01:02:40.092: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9735 pod-service-account-a46607db-b985-4a02-9441-a45962e639f8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Nov 13 01:02:40.413: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9735 pod-service-account-a46607db-b985-4a02-9441-a45962e639f8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Nov 13 01:02:40.646: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9735 pod-service-account-a46607db-b985-4a02-9441-a45962e639f8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:02:40.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9735" for this suite. 
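The same three probes the test runs above work against any pod whose service account token is auto-mounted; only the namespace, pod, and container names below are placeholders:

kubectl exec -n demo my-pod -c my-container -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
kubectl exec -n demo my-pod -c my-container -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl exec -n demo my-pod -c my-container -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace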
• [SLOW TEST:5.336 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":24,"skipped":498,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:35.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Nov 13 01:02:35.101: INFO: Waiting up to 5m0s for pod "pod-074b288b-1cb7-49ec-bce5-6950ddae3f2c" in namespace "emptydir-8315" to be "Succeeded or Failed" Nov 13 01:02:35.103: INFO: Pod "pod-074b288b-1cb7-49ec-bce5-6950ddae3f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.335538ms Nov 13 01:02:37.107: INFO: Pod "pod-074b288b-1cb7-49ec-bce5-6950ddae3f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006017067s Nov 13 01:02:39.110: INFO: Pod "pod-074b288b-1cb7-49ec-bce5-6950ddae3f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009534346s Nov 13 01:02:41.113: INFO: Pod "pod-074b288b-1cb7-49ec-bce5-6950ddae3f2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012688025s STEP: Saw pod success Nov 13 01:02:41.114: INFO: Pod "pod-074b288b-1cb7-49ec-bce5-6950ddae3f2c" satisfied condition "Succeeded or Failed" Nov 13 01:02:41.116: INFO: Trying to get logs from node node2 pod pod-074b288b-1cb7-49ec-bce5-6950ddae3f2c container test-container: STEP: delete the pod Nov 13 01:02:41.129: INFO: Waiting for pod pod-074b288b-1cb7-49ec-bce5-6950ddae3f2c to disappear Nov 13 01:02:41.130: INFO: Pod pod-074b288b-1cb7-49ec-bce5-6950ddae3f2c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:02:41.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8315" for this suite. 
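An equivalent of the (non-root,0644,tmpfs) case above, with illustrative names and a stock busybox image: run as a non-root UID, back the emptyDir with memory, and verify the created file's mode and the tmpfs mount:

kubectl apply -n demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # non-root, as in the test
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo data > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f && grep ' /mnt ' /proc/mounts"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory           # tmpfs-backed emptyDir
EOF
kubectl logs -n demo -f emptydir-tmpfs-demo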
• [SLOW TEST:6.074 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:32.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1514 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Nov 13 01:02:32.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4564 run e2e-test-httpd-pod --restart=Never --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1' Nov 13 01:02:33.043: INFO: stderr: "" Nov 13 01:02:33.043: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1518 Nov 13 01:02:33.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4564 delete pods e2e-test-httpd-pod' Nov 13 01:02:41.357: INFO: stderr: "" Nov 13 01:02:41.357: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:02:41.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4564" for this suite. 
• [SLOW TEST:8.478 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1511 should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":28,"skipped":556,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:35.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating Pod STEP: Reading file content from the nginx-container Nov 13 01:02:41.543: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2807 PodName:pod-sharedvolume-bb98830c-f489-460e-956d-5fe01baa0171 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 01:02:41.543: INFO: >>> kubeConfig: /root/.kube/config Nov 13 01:02:41.781: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:02:41.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2807" for this suite. 
• [SLOW TEST:6.284 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":26,"skipped":518,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:41.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-00b8390d-e666-41b1-82d6-63c7581a7f95 STEP: Creating a pod to test consume secrets Nov 13 01:02:41.404: INFO: Waiting up to 5m0s for pod "pod-secrets-1b57a071-1b7a-4e2e-bf87-0c1fdb963b94" in namespace "secrets-3598" to be "Succeeded or Failed" Nov 13 01:02:41.406: INFO: Pod "pod-secrets-1b57a071-1b7a-4e2e-bf87-0c1fdb963b94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011171ms Nov 13 01:02:43.411: INFO: Pod "pod-secrets-1b57a071-1b7a-4e2e-bf87-0c1fdb963b94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006224947s Nov 13 01:02:45.415: INFO: Pod "pod-secrets-1b57a071-1b7a-4e2e-bf87-0c1fdb963b94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010507233s STEP: Saw pod success Nov 13 01:02:45.415: INFO: Pod "pod-secrets-1b57a071-1b7a-4e2e-bf87-0c1fdb963b94" satisfied condition "Succeeded or Failed" Nov 13 01:02:45.417: INFO: Trying to get logs from node node1 pod pod-secrets-1b57a071-1b7a-4e2e-bf87-0c1fdb963b94 container secret-volume-test: STEP: delete the pod Nov 13 01:02:45.438: INFO: Waiting for pod pod-secrets-1b57a071-1b7a-4e2e-bf87-0c1fdb963b94 to disappear Nov 13 01:02:45.440: INFO: Pod pod-secrets-1b57a071-1b7a-4e2e-bf87-0c1fdb963b94 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:02:45.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3598" for this suite. 
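Consuming a Secret from a volume, as above, follows the same shape as the ConfigMap cases earlier; a sketch with illustrative names:

kubectl create secret generic demo-secret -n demo --from-literal=data-1=value-1
kubectl apply -n demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
EOF
kubectl logs -n demo -f pod-secrets-demo   # prints value-1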
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":557,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:45.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:02:45.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7698" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":30,"skipped":572,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:41.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Nov 13 01:02:41.845: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:02:46.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2210" for this suite. 
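On a RestartNever pod, as invoked above, init containers still run to completion, in order, before the app container starts. A sketch (names illustrative):

kubectl apply -n demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox
    command: ["true"]
  - name: init-2
    image: busybox
    command: ["true"]
  containers:
  - name: run-1
    image: busybox
    command: ["sh", "-c", "echo app container ran"]
EOF
kubectl get pod pod-init-demo -n demo -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'   # Completed Completed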
• ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":27,"skipped":535,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:40.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6062.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6062.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6062.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6062.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6062.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6062.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 13 01:02:47.015: INFO: DNS probes using dns-6062/dns-test-c527f153-6e63-49ac-938b-5c82fee40893 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:02:47.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6062" for this suite. 
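Part of what the probes above assert is that the kubelet writes the pod's own IP and hostname into its /etc/hosts. The quick manual check is simply to read the file (names illustrative):

kubectl run hosts-probe -n demo --restart=Never --image=busybox -- cat /etc/hosts
kubectl logs -n demo -f hosts-probe   # expect the kubelet-managed entry mapping the pod IP to its hostname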
• [SLOW TEST:6.079 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":25,"skipped":538,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:23.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Nov 13 01:02:23.870: INFO: >>> kubeConfig: /root/.kube/config Nov 13 01:02:32.963: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:02:51.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3808" for this suite. • [SLOW TEST:27.447 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":38,"skipped":522,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:51.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating api versions Nov 13 01:02:51.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7789 api-versions' Nov 13 01:02:51.467: INFO: stderr: "" Nov 13 01:02:51.467: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncustom.metrics.k8s.io/v1beta1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nintel.com/v1\nk8s.cni.cncf.io/v1\nmonitoring.coreos.com/v1\nmonitoring.coreos.com/v1alpha1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\ntelemetry.intel.com/v1alpha1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:02:51.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7789" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":39,"skipped":554,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:45.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Nov 13 01:02:45.598: INFO: The status of Pod pod-update-77f98334-7790-4359-9958-0eda365cc5aa is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:02:47.602: INFO: The status of Pod pod-update-77f98334-7790-4359-9958-0eda365cc5aa is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:02:49.603: INFO: The status of Pod pod-update-77f98334-7790-4359-9958-0eda365cc5aa is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:02:51.602: INFO: The status of Pod pod-update-77f98334-7790-4359-9958-0eda365cc5aa is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Nov 13 01:02:52.116: INFO: Successfully updated pod "pod-update-77f98334-7790-4359-9958-0eda365cc5aa" STEP: verifying the updated pod is in kubernetes Nov 13 01:02:52.122: INFO: Pod update OK [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:02:52.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1446" for this suite. 
• [SLOW TEST:6.567 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":419,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:41.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:02:41.174: INFO: Pod name sample-pod: Found 0 pods out of 1 Nov 13 01:02:46.176: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: Scaling up "test-rs" replicaset Nov 13 01:02:46.182: INFO: Updating replica set "test-rs" STEP: patching the ReplicaSet Nov 13 01:02:46.187: INFO: observed ReplicaSet test-rs in namespace replicaset-9516 with ReadyReplicas 1, AvailableReplicas 1 Nov 13 01:02:46.197: INFO: observed ReplicaSet test-rs in namespace replicaset-9516 with ReadyReplicas 1, AvailableReplicas 1 Nov 13 01:02:46.205: INFO: observed ReplicaSet test-rs in namespace replicaset-9516 with ReadyReplicas 1, AvailableReplicas 1 Nov 13 01:02:46.209: INFO: observed ReplicaSet test-rs in namespace replicaset-9516 with ReadyReplicas 1, AvailableReplicas 1 Nov 13 01:02:51.404: INFO: observed ReplicaSet test-rs in namespace replicaset-9516 with ReadyReplicas 2, AvailableReplicas 2 Nov 13 01:02:53.425: INFO: observed Replicaset test-rs in namespace replicaset-9516 with ReadyReplicas 3 found true [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:02:53.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9516" for this suite. 
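The scale-then-patch sequence above corresponds to the following kubectl calls against the same ReplicaSet name; the namespace is illustrative:

kubectl scale replicaset test-rs -n demo --replicas=3
kubectl patch replicaset test-rs -n demo --type=strategic -p '{"metadata":{"labels":{"patched":"true"}}}'
kubectl get replicaset test-rs -n demo --watch   # watch ReadyReplicas/AvailableReplicas converge on 3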
• [SLOW TEST:12.292 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":34,"skipped":419,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:51.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating pod Nov 13 01:02:51.521: INFO: The status of Pod pod-hostip-12901b98-90bd-4280-a825-e0dd76361940 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:02:53.527: INFO: The status of Pod pod-hostip-12901b98-90bd-4280-a825-e0dd76361940 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:02:55.526: INFO: The status of Pod pod-hostip-12901b98-90bd-4280-a825-e0dd76361940 is Running (Ready = true) Nov 13 01:02:55.532: INFO: Pod pod-hostip-12901b98-90bd-4280-a825-e0dd76361940 has hostIP: 10.10.190.208 [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:02:55.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7014" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":560,"failed":0} SSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":579,"failed":0} [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:52.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-a38797b6-ecb1-40f4-ad35-c75107e9fe0a STEP: Creating a pod to test consume configMaps Nov 13 01:02:52.166: INFO: Waiting up to 5m0s for pod "pod-configmaps-f3372395-94c0-449e-99b9-72819cca3239" in namespace "configmap-5654" to be "Succeeded or Failed" Nov 13 01:02:52.169: INFO: Pod "pod-configmaps-f3372395-94c0-449e-99b9-72819cca3239": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.422296ms Nov 13 01:02:54.173: INFO: Pod "pod-configmaps-f3372395-94c0-449e-99b9-72819cca3239": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006667235s Nov 13 01:02:56.179: INFO: Pod "pod-configmaps-f3372395-94c0-449e-99b9-72819cca3239": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01271724s STEP: Saw pod success Nov 13 01:02:56.179: INFO: Pod "pod-configmaps-f3372395-94c0-449e-99b9-72819cca3239" satisfied condition "Succeeded or Failed" Nov 13 01:02:56.182: INFO: Trying to get logs from node node2 pod pod-configmaps-f3372395-94c0-449e-99b9-72819cca3239 container agnhost-container: STEP: delete the pod Nov 13 01:02:56.194: INFO: Waiting for pod pod-configmaps-f3372395-94c0-449e-99b9-72819cca3239 to disappear Nov 13 01:02:56.196: INFO: Pod pod-configmaps-f3372395-94c0-449e-99b9-72819cca3239 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:02:56.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5654" for this suite. • ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:55.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:02:55.602: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-64526c97-b5a8-4ca6-abd9-23f09c1568c7" in namespace "security-context-test-4560" to be "Succeeded or Failed" Nov 13 01:02:55.604: INFO: Pod "busybox-readonly-false-64526c97-b5a8-4ca6-abd9-23f09c1568c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.349477ms Nov 13 01:02:57.607: INFO: Pod "busybox-readonly-false-64526c97-b5a8-4ca6-abd9-23f09c1568c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00538387s Nov 13 01:02:59.611: INFO: Pod "busybox-readonly-false-64526c97-b5a8-4ca6-abd9-23f09c1568c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008845925s Nov 13 01:02:59.611: INFO: Pod "busybox-readonly-false-64526c97-b5a8-4ca6-abd9-23f09c1568c7" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:02:59.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4560" for this suite. 
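The writable-rootfs case above is the default container behaviour made explicit through the securityContext; a sketch (names illustrative):

kubectl apply -n demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo writable > /rootfs-file && cat /rootfs-file"]
    securityContext:
      readOnlyRootFilesystem: false   # rootfs stays writable; true would make this write fail
EOF
kubectl logs -n demo -f busybox-readonly-false-demo   # prints "writable"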
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":575,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:46.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Nov 13 01:02:46.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9436 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Nov 13 01:02:46.707: INFO: stderr: "" Nov 13 01:02:46.707: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Nov 13 01:02:46.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9436 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server' Nov 13 01:02:47.137: INFO: stderr: "" Nov 13 01:02:47.137: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Nov 13 01:02:47.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9436 delete pods e2e-test-httpd-pod' Nov 13 01:03:01.556: INFO: stderr: "" Nov 13 01:03:01.556: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:01.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9436" for this suite. 
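The server-side dry-run pattern above validates a patch through full admission without persisting it, which is confirmed by reading the live object afterwards; the namespace is illustrative, the image names are as logged:

kubectl patch pod e2e-test-httpd-pod -n demo --dry-run=server -p '{"spec":{"containers":[{"name":"e2e-test-httpd-pod","image":"k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}}'
kubectl get pod e2e-test-httpd-pod -n demo -o jsonpath='{.spec.containers[0].image}'   # still httpd:2.4.38-1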
• [SLOW TEST:15.039 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:903 should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":28,"skipped":621,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:25.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W1113 01:01:25.549985 30 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ReplaceConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring the job is replaced with a new one STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:01.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-7200" for this suite. 
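Given the batch/v1beta1 deprecation warning logged above, the same ReplaceConcurrent behaviour is expressed in batch/v1 as below; the schedule and names are illustrative. With concurrencyPolicy: Replace, a still-running job is killed and replaced when the next tick fires:

kubectl apply -n demo -f - <<'EOF'
apiVersion: batch/v1
kind: CronJob
metadata:
  name: replace-demo
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Replace
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: work
            image: busybox
            command: ["sleep", "300"]   # deliberately outlives the schedule interval
EOF
kubectl get jobs -n demo --watch   # each minute the active job is replaced by a new one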
• [SLOW TEST:96.052 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":23,"skipped":396,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":579,"failed":0} [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:56.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Nov 13 01:02:56.241: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-3805 c1ac1355-971d-4327-9abc-7bd2bb3ca604 75620 0 2021-11-13 01:02:56 +0000 UTC map[] map[kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2021-11-13 01:02:56 +0000 UTC FieldsV1 
{"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-trg99,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-trg99,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},Pri
orityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 13 01:02:56.244: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:02:58.249: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:03:00.247: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:03:02.249: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Nov 13 01:03:02.249: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-3805 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 01:03:02.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Verifying customized DNS server is configured on pod... Nov 13 01:03:02.405: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-3805 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 01:03:02.405: INFO: >>> kubeConfig: /root/.kube/config Nov 13 01:03:02.643: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:02.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3805" for this suite. 
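The pod object dumped above reduces to a few lines of manifest: dnsPolicy None plus an explicit dnsConfig, which is what lands in the container's /etc/resolv.conf. Namespace illustrative; nameserver and search domain as logged:

kubectl apply -n demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: test-dns-nameservers
spec:
  restartPolicy: Never
  dnsPolicy: "None"
  dnsConfig:
    nameservers: ["1.1.1.1"]
    searches: ["resolv.conf.local"]
  containers:
  - name: probe
    image: busybox
    command: ["cat", "/etc/resolv.conf"]
EOF
kubectl logs -n demo -f test-dns-nameservers   # expect: nameserver 1.1.1.1, search resolv.conf.local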
• [SLOW TEST:6.451 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":33,"skipped":579,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:53.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-229d6294-5fd3-43f8-b099-cce2c3ede976 STEP: Creating configMap with name cm-test-opt-upd-cbbf3c70-c972-42ef-9f34-7249854df82c STEP: Creating the pod Nov 13 01:02:53.510: INFO: The status of Pod pod-configmaps-2e18a5eb-b2cf-475c-9388-23790c3d7cc3 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:02:55.516: INFO: The status of Pod pod-configmaps-2e18a5eb-b2cf-475c-9388-23790c3d7cc3 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:02:57.514: INFO: The status of Pod pod-configmaps-2e18a5eb-b2cf-475c-9388-23790c3d7cc3 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:02:59.513: INFO: The status of Pod pod-configmaps-2e18a5eb-b2cf-475c-9388-23790c3d7cc3 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:03:01.514: INFO: The status of Pod pod-configmaps-2e18a5eb-b2cf-475c-9388-23790c3d7cc3 is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-229d6294-5fd3-43f8-b099-cce2c3ede976 STEP: Updating configmap cm-test-opt-upd-cbbf3c70-c972-42ef-9f34-7249854df82c STEP: Creating configMap with name cm-test-opt-create-d21efa89-1197-479d-b11f-4122e2a7a767 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:03.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7416" for this suite. 
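The update propagation checked above can be watched by hand: update the ConfigMap and re-read the projected file until the kubelet syncs it, typically within a minute. Names are illustrative, and the pod is assumed to already mount cm-opt (with optional: true, as in the test) at /etc/cm:

kubectl create configmap cm-opt -n demo --from-literal=data-1=value-1
kubectl create configmap cm-opt -n demo --from-literal=data-1=value-2 --dry-run=client -o yaml | kubectl apply -n demo -f -
kubectl exec -n demo pod-configmaps-demo -- cat /etc/cm/data-1   # eventually prints value-2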
• [SLOW TEST:10.176 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":432,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:01.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Nov 13 01:03:01.658: INFO: Waiting up to 5m0s for pod "security-context-259921a5-a4d7-4ae1-9f1f-5f15d5228e25" in namespace "security-context-8384" to be "Succeeded or Failed" Nov 13 01:03:01.660: INFO: Pod "security-context-259921a5-a4d7-4ae1-9f1f-5f15d5228e25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.251925ms Nov 13 01:03:03.663: INFO: Pod "security-context-259921a5-a4d7-4ae1-9f1f-5f15d5228e25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005225161s Nov 13 01:03:05.667: INFO: Pod "security-context-259921a5-a4d7-4ae1-9f1f-5f15d5228e25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009128211s STEP: Saw pod success Nov 13 01:03:05.667: INFO: Pod "security-context-259921a5-a4d7-4ae1-9f1f-5f15d5228e25" satisfied condition "Succeeded or Failed" Nov 13 01:03:05.669: INFO: Trying to get logs from node node2 pod security-context-259921a5-a4d7-4ae1-9f1f-5f15d5228e25 container test-container: STEP: delete the pod Nov 13 01:03:05.682: INFO: Waiting for pod security-context-259921a5-a4d7-4ae1-9f1f-5f15d5228e25 to disappear Nov 13 01:03:05.684: INFO: Pod security-context-259921a5-a4d7-4ae1-9f1f-5f15d5228e25 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:05.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-8384" for this suite. 
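Pod-level runAsUser and runAsGroup apply to every container in the pod unless a container overrides them, and the spec above asserts that the container's effective uid and gid match the pod's securityContext. A minimal sketch follows; the 1001/2002 values are illustrative, not the ones the test chose.

# Hypothetical pod demonstrating pod.Spec.SecurityContext.RunAsUser/RunAsGroup.
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1001    # effective uid for all containers
    runAsGroup: 2002   # effective gid for all containers
  containers:
  - name: test-container
    image: busybox:1.35
    command: ["sh", "-c", "id -u && id -g"]   # expected to print 1001 and 2002
  restartPolicy: Never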
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":24,"skipped":420,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:01.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Nov 13 01:03:01.611: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:08.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9864" for this suite. • [SLOW TEST:6.706 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":29,"skipped":635,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:02.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Nov 13 01:03:02.776: INFO: Waiting up to 5m0s for pod "pod-30279dd2-705c-4882-a353-c1303bbf01c4" in namespace "emptydir-2163" to be "Succeeded or Failed" Nov 13 01:03:02.779: INFO: Pod "pod-30279dd2-705c-4882-a353-c1303bbf01c4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.162193ms Nov 13 01:03:04.784: INFO: Pod "pod-30279dd2-705c-4882-a353-c1303bbf01c4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007638885s Nov 13 01:03:06.787: INFO: Pod "pod-30279dd2-705c-4882-a353-c1303bbf01c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010458983s Nov 13 01:03:08.791: INFO: Pod "pod-30279dd2-705c-4882-a353-c1303bbf01c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015146783s STEP: Saw pod success Nov 13 01:03:08.791: INFO: Pod "pod-30279dd2-705c-4882-a353-c1303bbf01c4" satisfied condition "Succeeded or Failed" Nov 13 01:03:08.793: INFO: Trying to get logs from node node1 pod pod-30279dd2-705c-4882-a353-c1303bbf01c4 container test-container: STEP: delete the pod Nov 13 01:03:08.805: INFO: Waiting for pod pod-30279dd2-705c-4882-a353-c1303bbf01c4 to disappear Nov 13 01:03:08.808: INFO: Pod pod-30279dd2-705c-4882-a353-c1303bbf01c4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:08.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2163" for this suite. • [SLOW TEST:6.073 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":626,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:03.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:03:03.680: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-897bf5e7-7ff1-46d1-9f19-389b5f89b416" in namespace "security-context-test-5789" to be "Succeeded or Failed" Nov 13 01:03:03.684: INFO: Pod "busybox-privileged-false-897bf5e7-7ff1-46d1-9f19-389b5f89b416": Phase="Pending", Reason="", readiness=false. Elapsed: 3.616842ms Nov 13 01:03:05.687: INFO: Pod "busybox-privileged-false-897bf5e7-7ff1-46d1-9f19-389b5f89b416": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006700154s Nov 13 01:03:07.690: INFO: Pod "busybox-privileged-false-897bf5e7-7ff1-46d1-9f19-389b5f89b416": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009892662s Nov 13 01:03:09.695: INFO: Pod "busybox-privileged-false-897bf5e7-7ff1-46d1-9f19-389b5f89b416": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014686042s Nov 13 01:03:09.695: INFO: Pod "busybox-privileged-false-897bf5e7-7ff1-46d1-9f19-389b5f89b416" satisfied condition "Succeeded or Failed" Nov 13 01:03:09.701: INFO: Got logs for pod "busybox-privileged-false-897bf5e7-7ff1-46d1-9f19-389b5f89b416": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:09.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5789" for this suite. • [SLOW TEST:6.063 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":437,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:05.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota Nov 13 01:03:05.739: INFO: Pod name sample-pod: Found 0 pods out of 1 Nov 13 01:03:10.745: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the replicaset Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:10.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4634" for this suite. 
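The scale subresource read, updated, and patched above is a separate autoscaling/v1 Scale object served at .../apis/apps/v1/namespaces/<ns>/replicasets/test-rs/scale; writing to it changes only spec.replicas and never touches the pod template. Roughly, the object the test manipulates has this shape (resourceVersion and selector details omitted for brevity):

# Shape of the Scale subresource for the "test-rs" ReplicaSet.
apiVersion: autoscaling/v1
kind: Scale
metadata:
  name: test-rs
spec:
  replicas: 1   # the field the update/patch steps above modify
status:
  replicas: 1   # observed replica count reported back by the controller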
• [SLOW TEST:5.056 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":25,"skipped":429,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:08.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:03:08.406: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Nov 13 01:03:13.410: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Nov 13 01:03:13.411: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Nov 13 01:03:13.429: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9075 ed539570-844b-4292-adc1-6ba480e35b5a 76099 1 2021-11-13 01:03:13 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2021-11-13 01:03:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005413d78 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Nov 13 01:03:13.432: INFO: New ReplicaSet "test-cleanup-deployment-5b4d99b59b" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5b4d99b59b deployment-9075 20190f14-da19-4721-80d0-6f54be6b7eef 76101 1 2021-11-13 01:03:13 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment ed539570-844b-4292-adc1-6ba480e35b5a 0xc0054be1b7 0xc0054be1b8}] [] [{kube-controller-manager Update apps/v1 2021-11-13 01:03:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed539570-844b-4292-adc1-6ba480e35b5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5b4d99b59b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0054be248 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 13 01:03:13.432: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Nov 13 01:03:13.432: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-9075
c5a46618-6640-43ca-9272-cd7a8821f4ea 76100 1 2021-11-13 01:03:08 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment ed539570-844b-4292-adc1-6ba480e35b5a 0xc0054be0a7 0xc0054be0a8}] [] [{e2e.test Update apps/v1 2021-11-13 01:03:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-11-13 01:03:13 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"ed539570-844b-4292-adc1-6ba480e35b5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0054be148 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Nov 13 01:03:13.436: INFO: Pod "test-cleanup-controller-xbv4f" is available: &Pod{ObjectMeta:{test-cleanup-controller-xbv4f test-cleanup-controller- deployment-9075 175f4592-6222-4f4e-92fa-0144c7421506 76068 0 2021-11-13 01:03:08 +0000 UTC map[name:cleanup-pod pod:httpd] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.162" ], "mac": "16:1e:59:c2:80:c1", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.162" ], "mac": "16:1e:59:c2:80:c1", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-cleanup-controller c5a46618-6640-43ca-9272-cd7a8821f4ea 0xc0054be667 0xc0054be668}] [] [{kube-controller-manager Update v1 2021-11-13 01:03:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c5a46618-6640-43ca-9272-cd7a8821f4ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-11-13 01:03:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-11-13 01:03:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.162\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6pksb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6pksb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:fa
lse,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:03:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:03:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:03:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-11-13 01:03:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.162,StartTime:2021-11-13 01:03:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-11-13 01:03:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://de4ad6c775800dfbe035f4ec2c20eca8441faa08a4ace9ffd9e2a8fbc6b4c96c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.162,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 13 01:03:13.436: INFO: Pod "test-cleanup-deployment-5b4d99b59b-fr6fw" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-5b4d99b59b-fr6fw test-cleanup-deployment-5b4d99b59b- deployment-9075 09d2db6a-707e-4784-b482-0edfa8551d33 76104 0 2021-11-13 01:03:13 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-cleanup-deployment-5b4d99b59b 20190f14-da19-4721-80d0-6f54be6b7eef 0xc0054be85f 0xc0054be870}] [] [{kube-controller-manager Update v1 2021-11-13 01:03:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20190f14-da19-4721-80d0-6f54be6b7eef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gvqgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gvqgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exi
sts,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:13.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9075" for this suite. • [SLOW TEST:5.065 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":30,"skipped":670,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:09.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-be69d6b1-c358-4378-b2e9-7f74b5fd0d37 STEP: Creating a pod to test consume secrets Nov 13 01:03:09.771: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-45662970-3e33-4bea-aade-069ab0163a76" in namespace "projected-5250" to be "Succeeded or Failed" Nov 13 01:03:09.776: INFO: Pod "pod-projected-secrets-45662970-3e33-4bea-aade-069ab0163a76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.668217ms Nov 13 01:03:11.779: INFO: Pod "pod-projected-secrets-45662970-3e33-4bea-aade-069ab0163a76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007884206s Nov 13 01:03:13.783: INFO: Pod "pod-projected-secrets-45662970-3e33-4bea-aade-069ab0163a76": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011936786s STEP: Saw pod success Nov 13 01:03:13.783: INFO: Pod "pod-projected-secrets-45662970-3e33-4bea-aade-069ab0163a76" satisfied condition "Succeeded or Failed" Nov 13 01:03:13.785: INFO: Trying to get logs from node node2 pod pod-projected-secrets-45662970-3e33-4bea-aade-069ab0163a76 container projected-secret-volume-test: STEP: delete the pod Nov 13 01:03:13.799: INFO: Waiting for pod pod-projected-secrets-45662970-3e33-4bea-aade-069ab0163a76 to disappear Nov 13 01:03:13.801: INFO: Pod pod-projected-secrets-45662970-3e33-4bea-aade-069ab0163a76 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:13.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5250" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":446,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:08.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Nov 13 01:03:08.864: INFO: Waiting up to 5m0s for pod "pod-fbd35908-bbec-477d-bdb6-f2a7b2e853fa" in namespace "emptydir-3286" to be "Succeeded or Failed" Nov 13 01:03:08.867: INFO: Pod "pod-fbd35908-bbec-477d-bdb6-f2a7b2e853fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.844988ms Nov 13 01:03:10.871: INFO: Pod "pod-fbd35908-bbec-477d-bdb6-f2a7b2e853fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006584482s Nov 13 01:03:12.876: INFO: Pod "pod-fbd35908-bbec-477d-bdb6-f2a7b2e853fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011804007s Nov 13 01:03:14.881: INFO: Pod "pod-fbd35908-bbec-477d-bdb6-f2a7b2e853fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016347213s STEP: Saw pod success Nov 13 01:03:14.881: INFO: Pod "pod-fbd35908-bbec-477d-bdb6-f2a7b2e853fa" satisfied condition "Succeeded or Failed" Nov 13 01:03:14.884: INFO: Trying to get logs from node node1 pod pod-fbd35908-bbec-477d-bdb6-f2a7b2e853fa container test-container: STEP: delete the pod Nov 13 01:03:14.895: INFO: Waiting for pod pod-fbd35908-bbec-477d-bdb6-f2a7b2e853fa to disappear Nov 13 01:03:14.897: INFO: Pod pod-fbd35908-bbec-477d-bdb6-f2a7b2e853fa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:14.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3286" for this suite. 
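This spec and the neighboring emptydir permission specs all follow one pattern: mount an emptyDir volume (medium "" for the node-default disk-backed medium, "Memory" for tmpfs), then have a test container create a file with the requested mode and report what it sees. A hedged sketch of the tmpfs variant follows, with commands simplified from what the test image actually runs.

# Illustrative emptyDir/tmpfs pod; the non-root variants additionally set runAsUser.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-demo
spec:
  securityContext:
    runAsUser: 1001          # omit for the (root,...) variants
  containers:
  - name: test-container
    image: busybox:1.35
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory         # tmpfs; use medium: "" for the node-default medium
  restartPolicy: Never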
• [SLOW TEST:6.074 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":633,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:10.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Nov 13 01:03:10.825: INFO: Waiting up to 5m0s for pod "pod-f26eb632-98ed-40c0-9c5c-1be4e064c362" in namespace "emptydir-9098" to be "Succeeded or Failed" Nov 13 01:03:10.828: INFO: Pod "pod-f26eb632-98ed-40c0-9c5c-1be4e064c362": Phase="Pending", Reason="", readiness=false. Elapsed: 2.386137ms Nov 13 01:03:12.833: INFO: Pod "pod-f26eb632-98ed-40c0-9c5c-1be4e064c362": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007407481s Nov 13 01:03:14.837: INFO: Pod "pod-f26eb632-98ed-40c0-9c5c-1be4e064c362": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011285197s Nov 13 01:03:16.840: INFO: Pod "pod-f26eb632-98ed-40c0-9c5c-1be4e064c362": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014302902s STEP: Saw pod success Nov 13 01:03:16.840: INFO: Pod "pod-f26eb632-98ed-40c0-9c5c-1be4e064c362" satisfied condition "Succeeded or Failed" Nov 13 01:03:16.842: INFO: Trying to get logs from node node2 pod pod-f26eb632-98ed-40c0-9c5c-1be4e064c362 container test-container: STEP: delete the pod Nov 13 01:03:16.959: INFO: Waiting for pod pod-f26eb632-98ed-40c0-9c5c-1be4e064c362 to disappear Nov 13 01:03:16.961: INFO: Pod pod-f26eb632-98ed-40c0-9c5c-1be4e064c362 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:16.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9098" for this suite. 
• [SLOW TEST:6.176 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":442,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:13.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test env composition Nov 13 01:03:13.540: INFO: Waiting up to 5m0s for pod "var-expansion-e591f015-bdf1-4f2a-9cc4-c2ccf95f1fdd" in namespace "var-expansion-9349" to be "Succeeded or Failed" Nov 13 01:03:13.542: INFO: Pod "var-expansion-e591f015-bdf1-4f2a-9cc4-c2ccf95f1fdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235509ms Nov 13 01:03:15.547: INFO: Pod "var-expansion-e591f015-bdf1-4f2a-9cc4-c2ccf95f1fdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006695861s Nov 13 01:03:17.551: INFO: Pod "var-expansion-e591f015-bdf1-4f2a-9cc4-c2ccf95f1fdd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010721151s Nov 13 01:03:19.558: INFO: Pod "var-expansion-e591f015-bdf1-4f2a-9cc4-c2ccf95f1fdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018149501s STEP: Saw pod success Nov 13 01:03:19.558: INFO: Pod "var-expansion-e591f015-bdf1-4f2a-9cc4-c2ccf95f1fdd" satisfied condition "Succeeded or Failed" Nov 13 01:03:19.560: INFO: Trying to get logs from node node2 pod var-expansion-e591f015-bdf1-4f2a-9cc4-c2ccf95f1fdd container dapi-container: STEP: delete the pod Nov 13 01:03:19.572: INFO: Waiting for pod var-expansion-e591f015-bdf1-4f2a-9cc4-c2ccf95f1fdd to disappear Nov 13 01:03:19.574: INFO: Pod var-expansion-e591f015-bdf1-4f2a-9cc4-c2ccf95f1fdd no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:19.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9349" for this suite. 
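Env composition relies on $(VAR) expansion inside a later env entry's value; a reference resolves only if the variable is declared earlier in the same env list, which is exactly the behavior this spec exercises. A minimal sketch, with illustrative values:

# Hypothetical pod composing new env vars from previously defined ones.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  containers:
  - name: dapi-container
    image: busybox:1.35
    command: ["sh", "-c", "echo $FOOBAR"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"   # expands because FOO and BAR are declared above
  restartPolicy: Never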
• [SLOW TEST:6.074 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":701,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:14.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:03:14.960: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:20.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-985" for this suite. • [SLOW TEST:6.047 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":36,"skipped":649,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:17.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-ff2fb1ce-d8a5-4263-ad24-3a600c615204 STEP: Creating a pod to test consume secrets Nov 13 01:03:17.183: INFO: Waiting up to 5m0s for pod "pod-secrets-38d481a9-ae16-4e7e-8dff-91563cf53faa" in namespace "secrets-7770" to be "Succeeded or Failed" Nov 13 01:03:17.186: INFO: Pod "pod-secrets-38d481a9-ae16-4e7e-8dff-91563cf53faa": Phase="Pending", 
Reason="", readiness=false. Elapsed: 2.502369ms Nov 13 01:03:19.189: INFO: Pod "pod-secrets-38d481a9-ae16-4e7e-8dff-91563cf53faa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005915832s Nov 13 01:03:21.193: INFO: Pod "pod-secrets-38d481a9-ae16-4e7e-8dff-91563cf53faa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009084743s STEP: Saw pod success Nov 13 01:03:21.193: INFO: Pod "pod-secrets-38d481a9-ae16-4e7e-8dff-91563cf53faa" satisfied condition "Succeeded or Failed" Nov 13 01:03:21.195: INFO: Trying to get logs from node node2 pod pod-secrets-38d481a9-ae16-4e7e-8dff-91563cf53faa container secret-volume-test: STEP: delete the pod Nov 13 01:03:21.240: INFO: Waiting for pod pod-secrets-38d481a9-ae16-4e7e-8dff-91563cf53faa to disappear Nov 13 01:03:21.243: INFO: Pod pod-secrets-38d481a9-ae16-4e7e-8dff-91563cf53faa no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:21.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7770" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":544,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:59.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:02:59.702: INFO: The status of Pod test-webserver-7373aca1-8cfb-49af-b34e-06677bcc90c1 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:03:01.707: INFO: The status of Pod test-webserver-7373aca1-8cfb-49af-b34e-06677bcc90c1 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:03:03.709: INFO: The status of Pod test-webserver-7373aca1-8cfb-49af-b34e-06677bcc90c1 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:03:05.706: INFO: The status of Pod test-webserver-7373aca1-8cfb-49af-b34e-06677bcc90c1 is Running (Ready = false) Nov 13 01:03:07.705: INFO: The status of Pod test-webserver-7373aca1-8cfb-49af-b34e-06677bcc90c1 is Running (Ready = false) Nov 13 01:03:09.706: INFO: The status of Pod test-webserver-7373aca1-8cfb-49af-b34e-06677bcc90c1 is Running (Ready = false) Nov 13 01:03:11.705: INFO: The status of Pod test-webserver-7373aca1-8cfb-49af-b34e-06677bcc90c1 is Running (Ready = false) Nov 13 01:03:13.707: INFO: The status of Pod test-webserver-7373aca1-8cfb-49af-b34e-06677bcc90c1 is Running (Ready = false) Nov 13 01:03:15.706: INFO: The status of Pod test-webserver-7373aca1-8cfb-49af-b34e-06677bcc90c1 is Running (Ready = false) Nov 13 01:03:17.707: INFO: The 
status of Pod test-webserver-7373aca1-8cfb-49af-b34e-06677bcc90c1 is Running (Ready = false) Nov 13 01:03:19.705: INFO: The status of Pod test-webserver-7373aca1-8cfb-49af-b34e-06677bcc90c1 is Running (Ready = false) Nov 13 01:03:21.705: INFO: The status of Pod test-webserver-7373aca1-8cfb-49af-b34e-06677bcc90c1 is Running (Ready = true) Nov 13 01:03:21.707: INFO: Container started at 2021-11-13 01:03:04 +0000 UTC, pod became ready at 2021-11-13 01:03:19 +0000 UTC [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:21.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7072" for this suite. • [SLOW TEST:22.047 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":602,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:13.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-6374/configmap-test-de5e7691-b4b5-4c1a-98e0-939dd774e174 STEP: Creating a pod to test consume configMaps Nov 13 01:03:13.853: INFO: Waiting up to 5m0s for pod "pod-configmaps-a0870435-12f0-45f1-bfe8-8eb7dab70337" in namespace "configmap-6374" to be "Succeeded or Failed" Nov 13 01:03:13.857: INFO: Pod "pod-configmaps-a0870435-12f0-45f1-bfe8-8eb7dab70337": Phase="Pending", Reason="", readiness=false. Elapsed: 4.273955ms Nov 13 01:03:15.859: INFO: Pod "pod-configmaps-a0870435-12f0-45f1-bfe8-8eb7dab70337": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006640423s Nov 13 01:03:17.863: INFO: Pod "pod-configmaps-a0870435-12f0-45f1-bfe8-8eb7dab70337": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010203558s Nov 13 01:03:19.869: INFO: Pod "pod-configmaps-a0870435-12f0-45f1-bfe8-8eb7dab70337": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016441418s Nov 13 01:03:21.873: INFO: Pod "pod-configmaps-a0870435-12f0-45f1-bfe8-8eb7dab70337": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.02037557s STEP: Saw pod success Nov 13 01:03:21.873: INFO: Pod "pod-configmaps-a0870435-12f0-45f1-bfe8-8eb7dab70337" satisfied condition "Succeeded or Failed" Nov 13 01:03:21.876: INFO: Trying to get logs from node node2 pod pod-configmaps-a0870435-12f0-45f1-bfe8-8eb7dab70337 container env-test: STEP: delete the pod Nov 13 01:03:21.888: INFO: Waiting for pod pod-configmaps-a0870435-12f0-45f1-bfe8-8eb7dab70337 to disappear Nov 13 01:03:21.890: INFO: Pod pod-configmaps-a0870435-12f0-45f1-bfe8-8eb7dab70337 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:21.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6374" for this suite. • [SLOW TEST:8.081 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":449,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:47.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:02:47.111: INFO: created pod Nov 13 01:02:47.111: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-8819" to be "Succeeded or Failed" Nov 13 01:02:47.114: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.541486ms Nov 13 01:02:49.119: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007581814s Nov 13 01:02:51.122: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011093173s Nov 13 01:02:53.126: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015305975s STEP: Saw pod success Nov 13 01:02:53.127: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" Nov 13 01:03:23.127: INFO: polling logs Nov 13 01:03:23.265: INFO: Pod logs: 2021/11/13 01:02:52 OK: Got token 2021/11/13 01:02:52 validating with in-cluster discovery 2021/11/13 01:02:52 OK: got issuer https://kubernetes.default.svc.cluster.local 2021/11/13 01:02:52 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-8819:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1636765967, NotBefore:1636765367, IssuedAt:1636765367, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-8819", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"95b73350-07bd-4d38-baa1-ab75571bcbc9"}}} 2021/11/13 01:02:52 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local 2021/11/13 01:02:52 OK: Validated signature on JWT 2021/11/13 01:02:52 OK: Got valid claims from token! 2021/11/13 01:02:52 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-8819:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1636765967, NotBefore:1636765367, IssuedAt:1636765367, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-8819", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"95b73350-07bd-4d38-baa1-ab75571bcbc9"}}} Nov 13 01:03:23.265: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:23.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8819" for this suite. 
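The validator pod's log above shows the whole discovery flow: obtain a projected service-account token, fetch the issuer's OIDC discovery document, then verify the JWT signature against the published JWKS. As a rough illustration of the first two steps only (a hand-written sketch, not the oidc-discovery-validator source; TLS handling is deliberately simplified):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Projected service-account token, as mounted in any pod.
	token, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
	if err != nil {
		panic(err)
	}
	// InsecureSkipVerify keeps the sketch short; the real validator checks
	// the serving certificate and validates the JWT via the jwks_uri it
	// finds in the discovery document.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	req, _ := http.NewRequest("GET",
		"https://kubernetes.default.svc.cluster.local/.well-known/openid-configuration", nil)
	req.Header.Set("Authorization", "Bearer "+string(token))
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // issuer, jwks_uri, supported algorithms, ...
}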
• [SLOW TEST:36.207 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":26,"skipped":558,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:19.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 13 01:03:20.139: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 13 01:03:22.145: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362200, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362200, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362200, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362200, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 13 01:03:25.157: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:25.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6985" for this suite. STEP: Destroying namespace "webhook-6985-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.642 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":32,"skipped":704,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:21.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-b7b5743f-b703-48a1-9208-ee78ba3f8b2f STEP: Creating a pod to test consume configMaps Nov 13 01:03:21.044: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6a6070b3-d919-4840-afc9-6d926056a375" in namespace "projected-7121" to be "Succeeded or Failed" Nov 13 01:03:21.046: INFO: Pod "pod-projected-configmaps-6a6070b3-d919-4840-afc9-6d926056a375": Phase="Pending", Reason="", readiness=false. Elapsed: 2.776184ms Nov 13 01:03:23.050: INFO: Pod "pod-projected-configmaps-6a6070b3-d919-4840-afc9-6d926056a375": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006496835s Nov 13 01:03:25.054: INFO: Pod "pod-projected-configmaps-6a6070b3-d919-4840-afc9-6d926056a375": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009811975s Nov 13 01:03:27.057: INFO: Pod "pod-projected-configmaps-6a6070b3-d919-4840-afc9-6d926056a375": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.013202082s STEP: Saw pod success Nov 13 01:03:27.057: INFO: Pod "pod-projected-configmaps-6a6070b3-d919-4840-afc9-6d926056a375" satisfied condition "Succeeded or Failed" Nov 13 01:03:27.059: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-6a6070b3-d919-4840-afc9-6d926056a375 container agnhost-container: STEP: delete the pod Nov 13 01:03:28.182: INFO: Waiting for pod pod-projected-configmaps-6a6070b3-d919-4840-afc9-6d926056a375 to disappear Nov 13 01:03:28.184: INFO: Pod pod-projected-configmaps-6a6070b3-d919-4840-afc9-6d926056a375 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:28.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7121" for this suite. • [SLOW TEST:7.186 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:21.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Nov 13 01:03:21.297: INFO: Waiting up to 5m0s for pod "downward-api-894ca784-2c2b-400d-8b18-8bc1492d7a17" in namespace "downward-api-8636" to be "Succeeded or Failed" Nov 13 01:03:21.300: INFO: Pod "downward-api-894ca784-2c2b-400d-8b18-8bc1492d7a17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.615076ms Nov 13 01:03:23.303: INFO: Pod "downward-api-894ca784-2c2b-400d-8b18-8bc1492d7a17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005909718s Nov 13 01:03:25.305: INFO: Pod "downward-api-894ca784-2c2b-400d-8b18-8bc1492d7a17": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008467088s Nov 13 01:03:27.308: INFO: Pod "downward-api-894ca784-2c2b-400d-8b18-8bc1492d7a17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.010987948s STEP: Saw pod success Nov 13 01:03:27.308: INFO: Pod "downward-api-894ca784-2c2b-400d-8b18-8bc1492d7a17" satisfied condition "Succeeded or Failed" Nov 13 01:03:27.310: INFO: Trying to get logs from node node2 pod downward-api-894ca784-2c2b-400d-8b18-8bc1492d7a17 container dapi-container: STEP: delete the pod Nov 13 01:03:28.184: INFO: Waiting for pod downward-api-894ca784-2c2b-400d-8b18-8bc1492d7a17 to disappear Nov 13 01:03:28.186: INFO: Pod downward-api-894ca784-2c2b-400d-8b18-8bc1492d7a17 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:28.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8636" for this suite. 
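The dapi-container above only has to print its environment; the interesting part is the pod spec wiring limits and requests into env vars via resourceFieldRef. A minimal sketch of that wiring, with illustrative names and resource values (the same EnvVarSource pattern, with ConfigMapKeyRef in place of ResourceFieldRef, drives the ConfigMap environment test earlier in this log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.34",
				Command: []string{"sh", "-c", "env"},
				// Explicit limits/requests so the resourceFieldRefs below
				// resolve to container values, not node allocatable.
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
					}},
					{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"},
					}},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // apply this manifest, then read the env vars back from the pod log
}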
• [SLOW TEST:6.930 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":658,"failed":0} S ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":549,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:28.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:28.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6632" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":38,"skipped":726,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:23.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:03:23.353: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:28.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2425" for this suite. • [SLOW TEST:5.564 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":27,"skipped":581,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:25.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Nov 13 01:03:29.366: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:29.374: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4401" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":724,"failed":0} SS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:28.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create deployment with httpd image Nov 13 01:03:28.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5810 create -f -' Nov 13 01:03:29.326: INFO: stderr: "" Nov 13 01:03:29.326: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Nov 13 01:03:29.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5810 diff -f -' Nov 13 01:03:29.676: INFO: rc: 1 Nov 13 01:03:29.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5810 delete -f -' Nov 13 01:03:29.804: INFO: stderr: "" Nov 13 01:03:29.804: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:29.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5810" for this suite. 
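The "rc: 1" above is the expected outcome, not a failure: kubectl diff exits 0 when live and declared objects match, 1 when a difference is found, and greater than 1 on error. A sketch of how a harness can tell these cases apart (kubeconfig path, namespace, and manifest file are illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
		"--namespace=kubectl-example", "diff", "-f", "deployment.yaml")
	switch e := cmd.Run().(type) {
	case nil:
		fmt.Println("exit 0: no differences between live and declared objects")
	case *exec.ExitError:
		// kubectl diff uses exit code 1 for "differences found" (the rc: 1
		// logged above); anything greater indicates a real error.
		fmt.Println("exit", e.ExitCode())
	default:
		fmt.Println("failed to run kubectl:", e)
	}
}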
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":28,"skipped":588,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:29.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 13 01:03:29.422: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c7cd99e3-5853-4fca-a127-41fa7decdd3b" in namespace "projected-7030" to be "Succeeded or Failed" Nov 13 01:03:29.428: INFO: Pod "downwardapi-volume-c7cd99e3-5853-4fca-a127-41fa7decdd3b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.798076ms Nov 13 01:03:31.431: INFO: Pod "downwardapi-volume-c7cd99e3-5853-4fca-a127-41fa7decdd3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009493986s Nov 13 01:03:33.435: INFO: Pod "downwardapi-volume-c7cd99e3-5853-4fca-a127-41fa7decdd3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013446653s Nov 13 01:03:35.439: INFO: Pod "downwardapi-volume-c7cd99e3-5853-4fca-a127-41fa7decdd3b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017085868s Nov 13 01:03:37.442: INFO: Pod "downwardapi-volume-c7cd99e3-5853-4fca-a127-41fa7decdd3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020624595s STEP: Saw pod success Nov 13 01:03:37.443: INFO: Pod "downwardapi-volume-c7cd99e3-5853-4fca-a127-41fa7decdd3b" satisfied condition "Succeeded or Failed" Nov 13 01:03:37.445: INFO: Trying to get logs from node node2 pod downwardapi-volume-c7cd99e3-5853-4fca-a127-41fa7decdd3b container client-container: STEP: delete the pod Nov 13 01:03:37.475: INFO: Waiting for pod downwardapi-volume-c7cd99e3-5853-4fca-a127-41fa7decdd3b to disappear Nov 13 01:03:37.477: INFO: Pod downwardapi-volume-c7cd99e3-5853-4fca-a127-41fa7decdd3b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:37.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7030" for this suite. 
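The downwardapi-volume pod above mounts a projected volume whose downward-API source writes the container's cpu limit to a file, which the test then reads from inside the container. An illustrative sketch of just that volume definition (volume and file names are illustrative; the container name matches the log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}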
• [SLOW TEST:8.099 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":726,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:29.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Nov 13 01:03:29.866: INFO: The status of Pod labelsupdate3bfd6d9d-c81b-4a72-917b-a7a68ac9f7f8 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:03:31.869: INFO: The status of Pod labelsupdate3bfd6d9d-c81b-4a72-917b-a7a68ac9f7f8 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:03:33.871: INFO: The status of Pod labelsupdate3bfd6d9d-c81b-4a72-917b-a7a68ac9f7f8 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:03:35.870: INFO: The status of Pod labelsupdate3bfd6d9d-c81b-4a72-917b-a7a68ac9f7f8 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:03:37.870: INFO: The status of Pod labelsupdate3bfd6d9d-c81b-4a72-917b-a7a68ac9f7f8 is Running (Ready = true) Nov 13 01:03:38.389: INFO: Successfully updated pod "labelsupdate3bfd6d9d-c81b-4a72-917b-a7a68ac9f7f8" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:40.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4719" for this suite. 
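Here the pod mounts metadata.labels through a downward-API volume; after "Successfully updated pod", the kubelet rewrites the projected file, and that refresh is what the test polls for. A sketch of the volume (names illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "labels",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "labels",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}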
• [SLOW TEST:10.579 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":595,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:21.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Nov 13 01:03:21.965: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:41.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8857" for this suite. 
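The "setting up watch" / "verifying pod creation was observed" steps boil down to a watch on the namespace's pods. A compact client-go sketch of that pattern (namespace and kubeconfig path illustrative; the real test also matches events against the specific pod it submitted):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	w, err := cs.CoreV1().Pods("pods-example").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println("observed event type", ev.Type) // ADDED, MODIFIED, DELETED
	}
}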
• [SLOW TEST:19.749 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":475,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:41.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:41.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-4758" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":40,"skipped":486,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:41.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:149 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Nov 13 01:03:41.846: INFO: starting watch STEP: patching STEP: updating Nov 13 01:03:41.854: INFO: waiting for watch events with expected annotations Nov 13 01:03:41.854: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:41.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-5067" for this suite. 
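The IngressClass steps above (create, get, list, watch, patch, update, delete, deleteCollection) all go through the networking.k8s.io/v1 API. A minimal create-then-delete sketch with client-go (controller name and kubeconfig path illustrative):

package main

import (
	"context"
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ic := &networkingv1.IngressClass{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "e2e-example-"},
		Spec:       networkingv1.IngressClassSpec{Controller: "example.com/controller"},
	}
	created, err := cs.NetworkingV1().IngressClasses().Create(context.TODO(), ic, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created IngressClass", created.Name)
	cs.NetworkingV1().IngressClasses().Delete(context.TODO(), created.Name, metav1.DeleteOptions{})
}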
• ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:21.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Nov 13 01:03:22.034: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Nov 13 01:03:24.046: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362202, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362202, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362202, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362202, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:03:26.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362202, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362202, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362202, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362202, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:03:28.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362202, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362202, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not 
have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362202, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362202, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:03:30.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362202, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362202, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362202, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362202, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 01:03:32.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362202, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362202, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362202, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772362202, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 13 01:03:35.057: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:03:35.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:43.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8473" for this suite. 
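For the conversion test above, the CRD carries a Webhook conversion strategy pointing at the sample deployment's service, which lets the apiserver return a non homogeneous list of v1 and v2 CRs in either version. A sketch of that conversion stanza (service namespace, name, and path are illustrative, not taken from the test fixture):

package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	path := "/crdconvert"
	conv := apiextensionsv1.CustomResourceConversion{
		Strategy: apiextensionsv1.WebhookConverter,
		Webhook: &apiextensionsv1.WebhookConversion{
			ClientConfig: &apiextensionsv1.WebhookClientConfig{
				Service: &apiextensionsv1.ServiceReference{
					Namespace: "crd-webhook-example",
					Name:      "e2e-test-crd-conversion-webhook",
					Path:      &path,
				},
				// CABundle for the webhook's serving cert goes here.
			},
			ConversionReviewVersions: []string{"v1", "v1beta1"},
		},
	}
	b, _ := json.MarshalIndent(conv, "", "  ")
	fmt.Println(string(b))
}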
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:21.484 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":43,"skipped":619,"failed":0} SSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":41,"skipped":518,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:41.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslicemirroring STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 [It] should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: mirroring a new custom Endpoint Nov 13 01:03:41.913: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 STEP: mirroring an update to a custom Endpoint Nov 13 01:03:43.922: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 STEP: mirroring deletion of a custom Endpoint Nov 13 01:03:45.932: INFO: Waiting for 0 EndpointSlices to exist, got 1 [AfterEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:47.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslicemirroring-3554" for this suite. 
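The mirroring test drives a plain Endpoints object through create, update (10.1.2.3 to 10.2.3.4, as logged above), and delete, asserting that the endpointslice-mirroring controller keeps a matching EndpointSlice in step at each stage. A sketch of the initial custom Endpoints resource (object, namespace, and port names illustrative; addresses from the log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	ep := corev1.Endpoints{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "example-custom-endpoints",
			Namespace: "endpointslicemirroring-example",
		},
		Subsets: []corev1.EndpointSubset{{
			Addresses: []corev1.EndpointAddress{{IP: "10.1.2.3"}}, // updated to 10.2.3.4 in the test's second step
			Ports:     []corev1.EndpointPort{{Name: "example", Port: 80}},
		}},
	}
	b, _ := json.MarshalIndent(ep, "", "  ")
	fmt.Println(string(b))
}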
• [SLOW TEST:6.064 seconds] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":42,"skipped":518,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:37.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:03:37.561: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Nov 13 01:03:46.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1659 --namespace=crd-publish-openapi-1659 create -f -' Nov 13 01:03:46.563: INFO: stderr: "" Nov 13 01:03:46.563: INFO: stdout: "e2e-test-crd-publish-openapi-401-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Nov 13 01:03:46.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1659 --namespace=crd-publish-openapi-1659 delete e2e-test-crd-publish-openapi-401-crds test-foo' Nov 13 01:03:46.731: INFO: stderr: "" Nov 13 01:03:46.731: INFO: stdout: "e2e-test-crd-publish-openapi-401-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Nov 13 01:03:46.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1659 --namespace=crd-publish-openapi-1659 apply -f -' Nov 13 01:03:47.089: INFO: stderr: "" Nov 13 01:03:47.089: INFO: stdout: "e2e-test-crd-publish-openapi-401-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Nov 13 01:03:47.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1659 --namespace=crd-publish-openapi-1659 delete e2e-test-crd-publish-openapi-401-crds test-foo' Nov 13 01:03:47.258: INFO: stderr: "" Nov 13 01:03:47.258: INFO: stdout: "e2e-test-crd-publish-openapi-401-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Nov 13 01:03:47.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1659 --namespace=crd-publish-openapi-1659 create -f -' Nov 13 01:03:47.579: INFO: rc: 1 Nov 13 01:03:47.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1659 --namespace=crd-publish-openapi-1659 apply 
-f -' Nov 13 01:03:47.895: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Nov 13 01:03:47.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1659 --namespace=crd-publish-openapi-1659 create -f -' Nov 13 01:03:48.225: INFO: rc: 1 Nov 13 01:03:48.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1659 --namespace=crd-publish-openapi-1659 apply -f -' Nov 13 01:03:48.553: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Nov 13 01:03:48.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1659 explain e2e-test-crd-publish-openapi-401-crds' Nov 13 01:03:48.881: INFO: stderr: "" Nov 13 01:03:48.881: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-401-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Nov 13 01:03:48.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1659 explain e2e-test-crd-publish-openapi-401-crds.metadata' Nov 13 01:03:49.217: INFO: stderr: "" Nov 13 01:03:49.217: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-401-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. 
Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Nov 13 01:03:49.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1659 explain e2e-test-crd-publish-openapi-401-crds.spec' Nov 13 01:03:49.566: INFO: stderr: "" Nov 13 01:03:49.566: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-401-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Nov 13 01:03:49.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1659 explain e2e-test-crd-publish-openapi-401-crds.spec.bars' Nov 13 01:03:49.910: INFO: stderr: "" Nov 13 01:03:49.910: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-401-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Nov 13 01:03:49.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1659 explain e2e-test-crd-publish-openapi-401-crds.spec.bars2' Nov 13 01:03:50.272: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:53.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1659" for this suite. 
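All of the kubectl explain output and the create/apply rejections above are driven by the validation schema published with the CRD. Reconstructed from the field names and descriptions in this log (the shape is grounded in the output, but treat it as a sketch rather than the test's exact fixture), the schema looks roughly like:

package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	// In a real CRD this sits under spec.versions[i].schema.openAPIV3Schema.
	schema := apiextensionsv1.JSONSchemaProps{
		Type:        "object",
		Description: "Foo CRD for Testing",
		Properties: map[string]apiextensionsv1.JSONSchemaProps{
			"spec": {
				Type:        "object",
				Description: "Specification of Foo",
				Properties: map[string]apiextensionsv1.JSONSchemaProps{
					"bars": {
						Type:        "array",
						Description: "List of Bars and their specs.",
						Items: &apiextensionsv1.JSONSchemaPropsOrArray{
							Schema: &apiextensionsv1.JSONSchemaProps{
								Type:     "object",
								Required: []string{"name"}, // why "rejects request without required properties" fails creates
								Properties: map[string]apiextensionsv1.JSONSchemaProps{
									"name": {Type: "string", Description: "Name of Bar."},
									"age":  {Type: "string", Description: "Age of Bar."},
									"bazs": {
										Type:        "array",
										Description: "List of Bazs.",
										Items: &apiextensionsv1.JSONSchemaPropsOrArray{
											Schema: &apiextensionsv1.JSONSchemaProps{Type: "string"},
										},
									},
								},
							},
						},
					},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(schema, "", "  ")
	fmt.Println(string(b))
}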
• [SLOW TEST:16.237 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":35,"skipped":749,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:47.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 13 01:03:48.007: INFO: Waiting up to 5m0s for pod "downwardapi-volume-73d7250a-87c0-43c1-832c-c79a5f3b805c" in namespace "projected-3007" to be "Succeeded or Failed" Nov 13 01:03:48.009: INFO: Pod "downwardapi-volume-73d7250a-87c0-43c1-832c-c79a5f3b805c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.568377ms Nov 13 01:03:50.013: INFO: Pod "downwardapi-volume-73d7250a-87c0-43c1-832c-c79a5f3b805c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006574402s Nov 13 01:03:52.016: INFO: Pod "downwardapi-volume-73d7250a-87c0-43c1-832c-c79a5f3b805c": Phase="Running", Reason="", readiness=true. Elapsed: 4.009293946s Nov 13 01:03:54.020: INFO: Pod "downwardapi-volume-73d7250a-87c0-43c1-832c-c79a5f3b805c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013291342s STEP: Saw pod success Nov 13 01:03:54.020: INFO: Pod "downwardapi-volume-73d7250a-87c0-43c1-832c-c79a5f3b805c" satisfied condition "Succeeded or Failed" Nov 13 01:03:54.023: INFO: Trying to get logs from node node1 pod downwardapi-volume-73d7250a-87c0-43c1-832c-c79a5f3b805c container client-container: STEP: delete the pod Nov 13 01:03:54.049: INFO: Waiting for pod downwardapi-volume-73d7250a-87c0-43c1-832c-c79a5f3b805c to disappear Nov 13 01:03:54.051: INFO: Pod downwardapi-volume-73d7250a-87c0-43c1-832c-c79a5f3b805c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:54.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3007" for this suite. 
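The "set mode on item file" case differs from the earlier projected downward-API tests only in that the item pins an explicit file mode, which the container then stats. A sketch of such an item (path and mode are illustrative, not the test's exact values):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // read-only for the owner; surfaced as the file's mode in the container
	item := corev1.DownwardAPIVolumeFile{
		Path:     "podinfo/podname",
		FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
		Mode:     &mode,
	}
	b, _ := json.MarshalIndent(item, "", "  ")
	fmt.Println(string(b))
}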
• [SLOW TEST:6.086 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":534,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:28.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready Nov 13 01:03:28.409: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 0 and labels map[test-deployment-static:true] Nov 13 01:03:28.409: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 0 and labels map[test-deployment-static:true] Nov 13 01:03:28.413: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 0 and labels map[test-deployment-static:true] Nov 13 01:03:28.413: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 0 and labels map[test-deployment-static:true] Nov 13 01:03:28.419: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 0 and labels map[test-deployment-static:true] Nov 13 01:03:28.419: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 0 and labels map[test-deployment-static:true] Nov 13 01:03:28.435: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 0 and labels map[test-deployment-static:true] Nov 13 01:03:28.435: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 0 and labels map[test-deployment-static:true] Nov 13 01:03:35.667: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 1 and labels map[test-deployment-static:true] Nov 13 01:03:35.667: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 1 and labels map[test-deployment-static:true] Nov 13 01:03:36.086: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment Nov 13 01:03:36.091: INFO: observed event type ADDED STEP: waiting for Replicas to scale Nov 13 01:03:36.093: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 0 Nov 13 01:03:36.093: INFO: observed Deployment test-deployment in 
namespace deployment-816 with ReadyReplicas 0 Nov 13 01:03:36.093: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 0 Nov 13 01:03:36.093: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 0 Nov 13 01:03:36.093: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 0 Nov 13 01:03:36.093: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 0 Nov 13 01:03:36.093: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 0 Nov 13 01:03:36.093: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 0 Nov 13 01:03:36.093: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 1 Nov 13 01:03:36.093: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 1 Nov 13 01:03:36.093: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 2 Nov 13 01:03:36.093: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 2 Nov 13 01:03:36.093: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 2 Nov 13 01:03:36.093: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 2 Nov 13 01:03:36.096: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 2 Nov 13 01:03:36.096: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 2 Nov 13 01:03:36.102: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 2 Nov 13 01:03:36.102: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 2 Nov 13 01:03:36.108: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 1 Nov 13 01:03:36.108: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 1 Nov 13 01:03:36.115: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 1 Nov 13 01:03:36.115: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 1 Nov 13 01:03:40.017: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 2 Nov 13 01:03:40.017: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 2 Nov 13 01:03:40.031: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 1 STEP: listing Deployments Nov 13 01:03:40.035: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment Nov 13 01:03:40.047: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 1 STEP: fetching the DeploymentStatus Nov 13 01:03:40.054: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Nov 13 01:03:40.054: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Nov 13 01:03:40.058: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Nov 13 01:03:40.065: INFO: observed Deployment test-deployment in 
namespace deployment-816 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Nov 13 01:03:40.071: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Nov 13 01:03:47.805: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Nov 13 01:03:51.974: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] Nov 13 01:03:51.985: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Nov 13 01:03:52.001: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Nov 13 01:03:56.563: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus STEP: fetching the DeploymentStatus Nov 13 01:03:56.587: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 1 Nov 13 01:03:56.588: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 1 Nov 13 01:03:56.588: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 1 Nov 13 01:03:56.588: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 1 Nov 13 01:03:56.588: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 1 Nov 13 01:03:56.588: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 2 Nov 13 01:03:56.588: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 3 Nov 13 01:03:56.588: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 2 Nov 13 01:03:56.588: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 2 Nov 13 01:03:56.588: INFO: observed Deployment test-deployment in namespace deployment-816 with ReadyReplicas 3 STEP: deleting the Deployment Nov 13 01:03:56.595: INFO: observed event type MODIFIED Nov 13 01:03:56.595: INFO: observed event type MODIFIED Nov 13 01:03:56.596: INFO: observed event type MODIFIED Nov 13 01:03:56.596: INFO: observed event type MODIFIED Nov 13 01:03:56.596: INFO: observed event type MODIFIED Nov 13 01:03:56.596: INFO: observed event type MODIFIED Nov 13 01:03:56.596: INFO: observed event type MODIFIED Nov 13 01:03:56.596: INFO: observed event type MODIFIED Nov 13 01:03:56.596: INFO: observed event type MODIFIED Nov 13 01:03:56.596: INFO: observed event type MODIFIED Nov 13 01:03:56.596: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Nov 13 01:03:56.599: INFO: Log out all the ReplicaSets if there is no deployment created [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:03:56.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-816" for this suite. 
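The lifecycle walked through above is create → patch → update → patch status → delete, with the watch printing every ReadyReplicas transition along the way. A sketch of the kind of Deployment being driven — the name, replica count, and static label are taken from the log; the image is an illustrative assumption:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    test-deployment-static: "true"
spec:
  replicas: 2
  selector:
    matchLabels:
      test-deployment-static: "true"
  template:
    metadata:
      labels:
        test-deployment-static: "true"
    spec:
      containers:
      - name: test-deployment
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32   # illustrative image
```

The "patching the Deployment" step corresponds to something like `kubectl patch deployment test-deployment -p '{"metadata":{"labels":{"test-deployment":"patched"}}}'` combined with a pod-template change, which is why ReadyReplicas drops and climbs again as the new ReplicaSet rolls out — consistent with the `map[test-deployment:patched test-deployment-static:true]` labels found in the listing step.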
• [SLOW TEST:28.229 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":39,"skipped":728,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:56.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-fcacc370-10fb-48e1-a69f-16d6d32b7fd3 STEP: Creating a pod to test consume configMaps Nov 13 01:03:56.693: INFO: Waiting up to 5m0s for pod "pod-configmaps-49f68e16-586e-43ac-b12b-32b93a9b3d99" in namespace "configmap-3589" to be "Succeeded or Failed" Nov 13 01:03:56.696: INFO: Pod "pod-configmaps-49f68e16-586e-43ac-b12b-32b93a9b3d99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.889611ms Nov 13 01:03:58.700: INFO: Pod "pod-configmaps-49f68e16-586e-43ac-b12b-32b93a9b3d99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007013957s Nov 13 01:04:00.706: INFO: Pod "pod-configmaps-49f68e16-586e-43ac-b12b-32b93a9b3d99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012273988s Nov 13 01:04:02.709: INFO: Pod "pod-configmaps-49f68e16-586e-43ac-b12b-32b93a9b3d99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016031035s STEP: Saw pod success Nov 13 01:04:02.710: INFO: Pod "pod-configmaps-49f68e16-586e-43ac-b12b-32b93a9b3d99" satisfied condition "Succeeded or Failed" Nov 13 01:04:02.712: INFO: Trying to get logs from node node2 pod pod-configmaps-49f68e16-586e-43ac-b12b-32b93a9b3d99 container agnhost-container: STEP: delete the pod Nov 13 01:04:02.724: INFO: Waiting for pod pod-configmaps-49f68e16-586e-43ac-b12b-32b93a9b3d99 to disappear Nov 13 01:04:02.726: INFO: Pod pod-configmaps-49f68e16-586e-43ac-b12b-32b93a9b3d99 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:04:02.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3589" for this suite. 
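Here the ConfigMap key is remapped to a different path inside the volume and given an explicit per-item mode — that remapping plus the Item mode is what "with mappings and Item mode set" refers to. A minimal sketch with hypothetical names:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map          # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["mounttest",
           "--file_content=/etc/configmap-volume/path/to/data-2",
           "--file_mode=/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-2               # key remapped to a nested path
        mode: 0400                         # the Item mode under test
```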
• [SLOW TEST:6.076 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":755,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:54.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-d6add86f-97b3-401d-b8aa-6cf04e1fcb0a STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:04:04.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1817" for this suite. • [SLOW TEST:10.073 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":535,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:04:02.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-098583fe-b599-4b3d-9f67-cd021ad4c490 STEP: Creating a pod to test consume configMaps Nov 13 01:04:02.778: INFO: Waiting up to 5m0s for pod "pod-configmaps-7feb00b5-1137-45e9-8f57-a098cb348518" in namespace "configmap-3128" to be "Succeeded or Failed" Nov 13 01:04:02.782: INFO: Pod "pod-configmaps-7feb00b5-1137-45e9-8f57-a098cb348518": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.360181ms Nov 13 01:04:04.786: INFO: Pod "pod-configmaps-7feb00b5-1137-45e9-8f57-a098cb348518": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008225559s Nov 13 01:04:06.789: INFO: Pod "pod-configmaps-7feb00b5-1137-45e9-8f57-a098cb348518": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011529957s STEP: Saw pod success Nov 13 01:04:06.790: INFO: Pod "pod-configmaps-7feb00b5-1137-45e9-8f57-a098cb348518" satisfied condition "Succeeded or Failed" Nov 13 01:04:06.791: INFO: Trying to get logs from node node2 pod pod-configmaps-7feb00b5-1137-45e9-8f57-a098cb348518 container agnhost-container: STEP: delete the pod Nov 13 01:04:06.808: INFO: Waiting for pod pod-configmaps-7feb00b5-1137-45e9-8f57-a098cb348518 to disappear Nov 13 01:04:06.810: INFO: Pod pod-configmaps-7feb00b5-1137-45e9-8f57-a098cb348518 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:04:06.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3128" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":757,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:04:06.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Nov 13 01:04:06.865: INFO: Waiting up to 5m0s for pod "security-context-3cc07297-e30c-4da0-bbd5-c241dfd95122" in namespace "security-context-4594" to be "Succeeded or Failed" Nov 13 01:04:06.867: INFO: Pod "security-context-3cc07297-e30c-4da0-bbd5-c241dfd95122": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180694ms Nov 13 01:04:08.872: INFO: Pod "security-context-3cc07297-e30c-4da0-bbd5-c241dfd95122": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0070232s Nov 13 01:04:10.879: INFO: Pod "security-context-3cc07297-e30c-4da0-bbd5-c241dfd95122": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01330245s STEP: Saw pod success Nov 13 01:04:10.879: INFO: Pod "security-context-3cc07297-e30c-4da0-bbd5-c241dfd95122" satisfied condition "Succeeded or Failed" Nov 13 01:04:10.882: INFO: Trying to get logs from node node2 pod security-context-3cc07297-e30c-4da0-bbd5-c241dfd95122 container test-container: STEP: delete the pod Nov 13 01:04:10.894: INFO: Waiting for pod security-context-3cc07297-e30c-4da0-bbd5-c241dfd95122 to disappear Nov 13 01:04:10.896: INFO: Pod security-context-3cc07297-e30c-4da0-bbd5-c241dfd95122 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:04:10.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-4594" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":42,"skipped":764,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:04:04.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Creating a NodePort Service STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota STEP: Ensuring resource quota status captures service creation STEP: Deleting Services STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:04:15.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1981" for this suite. • [SLOW TEST:11.110 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":-1,"completed":45,"skipped":582,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:04:15.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:04:15.386: INFO: The status of Pod busybox-host-aliases99e8c124-2942-4103-8c33-ba90b4487167 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:04:17.390: INFO: The status of Pod busybox-host-aliases99e8c124-2942-4103-8c33-ba90b4487167 is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:04:19.390: INFO: The status of Pod busybox-host-aliases99e8c124-2942-4103-8c33-ba90b4487167 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:04:19.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-920" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":589,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:53.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-projected-lz92 STEP: Creating a pod to test atomic-volume-subpath Nov 13 01:03:53.876: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-lz92" in namespace "subpath-9722" to be "Succeeded or Failed" Nov 13 01:03:53.879: INFO: Pod "pod-subpath-test-projected-lz92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.43687ms Nov 13 01:03:55.882: INFO: Pod "pod-subpath-test-projected-lz92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006051006s Nov 13 01:03:57.886: INFO: Pod "pod-subpath-test-projected-lz92": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.010235334s Nov 13 01:03:59.890: INFO: Pod "pod-subpath-test-projected-lz92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013889292s Nov 13 01:04:01.893: INFO: Pod "pod-subpath-test-projected-lz92": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016481441s Nov 13 01:04:03.901: INFO: Pod "pod-subpath-test-projected-lz92": Phase="Running", Reason="", readiness=true. Elapsed: 10.024348255s Nov 13 01:04:05.906: INFO: Pod "pod-subpath-test-projected-lz92": Phase="Running", Reason="", readiness=true. Elapsed: 12.029784664s Nov 13 01:04:07.913: INFO: Pod "pod-subpath-test-projected-lz92": Phase="Running", Reason="", readiness=true. Elapsed: 14.036627268s Nov 13 01:04:09.918: INFO: Pod "pod-subpath-test-projected-lz92": Phase="Running", Reason="", readiness=true. Elapsed: 16.042193456s Nov 13 01:04:11.921: INFO: Pod "pod-subpath-test-projected-lz92": Phase="Running", Reason="", readiness=true. Elapsed: 18.044897266s Nov 13 01:04:13.927: INFO: Pod "pod-subpath-test-projected-lz92": Phase="Running", Reason="", readiness=true. Elapsed: 20.050478782s Nov 13 01:04:15.933: INFO: Pod "pod-subpath-test-projected-lz92": Phase="Running", Reason="", readiness=true. Elapsed: 22.057251022s Nov 13 01:04:17.938: INFO: Pod "pod-subpath-test-projected-lz92": Phase="Running", Reason="", readiness=true. Elapsed: 24.06152611s Nov 13 01:04:19.940: INFO: Pod "pod-subpath-test-projected-lz92": Phase="Running", Reason="", readiness=true. Elapsed: 26.064235514s Nov 13 01:04:21.944: INFO: Pod "pod-subpath-test-projected-lz92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.067871489s STEP: Saw pod success Nov 13 01:04:21.944: INFO: Pod "pod-subpath-test-projected-lz92" satisfied condition "Succeeded or Failed" Nov 13 01:04:21.946: INFO: Trying to get logs from node node2 pod pod-subpath-test-projected-lz92 container test-container-subpath-projected-lz92: STEP: delete the pod Nov 13 01:04:21.965: INFO: Waiting for pod pod-subpath-test-projected-lz92 to disappear Nov 13 01:04:21.967: INFO: Pod pod-subpath-test-projected-lz92 no longer exists STEP: Deleting pod pod-subpath-test-projected-lz92 Nov 13 01:04:21.967: INFO: Deleting pod "pod-subpath-test-projected-lz92" in namespace "subpath-9722" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:04:21.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9722" for this suite. 
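The 28-second runtime above comes from the container repeatedly reading the subpath file while the atomic-writer machinery swaps the underlying volume contents beneath it. A rough sketch of the shape of such a pod, assuming a pre-existing ConfigMap — all names here are illustrative, and the real fixture wires the volume differently:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-projected-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["mounttest", "--file_content_in_loop=/test-subpath", "--retry_time=20"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-subpath
      subPath: projected-configmap-key     # mount a single projected file
  volumes:
  - name: test-volume
    projected:
      sources:
      - configMap:
          name: my-configmap               # assumed to exist with this key
          items:
          - key: configmap-key
            path: projected-configmap-key
```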
• [SLOW TEST:28.142 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":36,"skipped":782,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:28.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:04:28.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-487" for this suite. 
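This test deliberately holds for a full minute (hence the 60-second SLOW TEST tally just below) to confirm two invariants: Ready never becomes true, and restartCount never moves — a failing readiness probe gates traffic but, unlike a liveness probe, never kills the container. A minimal reproduction sketch; the image and command are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-never-ready
spec:
  containers:
  - name: test-webserver
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]    # always fails: pod stays Running, never Ready
      initialDelaySeconds: 5
      periodSeconds: 5
```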
• [SLOW TEST:60.047 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":550,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:04:21.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-47c35e00-e43f-425e-bd89-a17de796790a STEP: Creating a pod to test consume secrets Nov 13 01:04:22.025: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-57e63278-e923-4b71-808f-f0ed853d8e94" in namespace "projected-3811" to be "Succeeded or Failed" Nov 13 01:04:22.028: INFO: Pod "pod-projected-secrets-57e63278-e923-4b71-808f-f0ed853d8e94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.398931ms Nov 13 01:04:24.032: INFO: Pod "pod-projected-secrets-57e63278-e923-4b71-808f-f0ed853d8e94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006269799s Nov 13 01:04:26.037: INFO: Pod "pod-projected-secrets-57e63278-e923-4b71-808f-f0ed853d8e94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011291203s Nov 13 01:04:28.042: INFO: Pod "pod-projected-secrets-57e63278-e923-4b71-808f-f0ed853d8e94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016188951s Nov 13 01:04:30.045: INFO: Pod "pod-projected-secrets-57e63278-e923-4b71-808f-f0ed853d8e94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.019241114s STEP: Saw pod success Nov 13 01:04:30.045: INFO: Pod "pod-projected-secrets-57e63278-e923-4b71-808f-f0ed853d8e94" satisfied condition "Succeeded or Failed" Nov 13 01:04:30.048: INFO: Trying to get logs from node node2 pod pod-projected-secrets-57e63278-e923-4b71-808f-f0ed853d8e94 container projected-secret-volume-test: STEP: delete the pod Nov 13 01:04:30.060: INFO: Waiting for pod pod-projected-secrets-57e63278-e923-4b71-808f-f0ed853d8e94 to disappear Nov 13 01:04:30.063: INFO: Pod pod-projected-secrets-57e63278-e923-4b71-808f-f0ed853d8e94 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:04:30.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3811" for this suite. 
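The non-root/fsGroup variant checks that the projected secret file is still readable after the kubelet applies `defaultMode` and chowns the volume to the pod's fsGroup. A sketch, assuming the referenced secret exists with a `data-1` key — names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000        # non-root
    fsGroup: 1001          # group ownership applied to the volume
  containers:
  - name: projected-secret-volume-test
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["mounttest", "--file_content=/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0440    # readable by owner and group only
      sources:
      - secret:
          name: projected-secret-test    # assumed to exist
```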
• [SLOW TEST:8.080 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":788,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ Nov 13 01:04:30.110: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:04:10.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1548 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Nov 13 01:04:10.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-103 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Nov 13 01:04:11.121: INFO: stderr: "" Nov 13 01:04:11.121: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Nov 13 01:04:16.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-103 get pod e2e-test-httpd-pod -o json' Nov 13 01:04:16.333: INFO: stderr: "" Nov 13 01:04:16.333: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.3.172\\\"\\n ],\\n \\\"mac\\\": \\\"8a:24:38:3d:a3:fa\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"k8s.v1.cni.cncf.io/networks-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.3.172\\\"\\n ],\\n \\\"mac\\\": \\\"8a:24:38:3d:a3:fa\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"kubernetes.io/psp\": \"collectd\"\n },\n \"creationTimestamp\": \"2021-11-13T01:04:11Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-103\",\n \"resourceVersion\": \"77627\",\n \"uid\": \"aab3d3b2-3b09-4e6c-8594-4ebc70de5103\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imagePullPolicy\": \"Always\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": 
\"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-gpkfq\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"node1\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-gpkfq\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-11-13T01:04:11Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-11-13T01:04:14Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-11-13T01:04:14Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-11-13T01:04:11Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://41ce7f0e2f97200294d0b2f8eb624d1abbd342e0edd1009dfad84545a8410cb3\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-11-13T01:04:13Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.10.190.207\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.3.172\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.3.172\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-11-13T01:04:11Z\"\n }\n}\n" STEP: replace the image in the pod Nov 13 01:04:16.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-103 replace -f -' Nov 13 01:04:16.747: INFO: stderr: "" Nov 13 01:04:16.747: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-1 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1552 Nov 13 01:04:16.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-103 delete pods e2e-test-httpd-pod' Nov 13 01:04:31.383: INFO: stderr: "" Nov 
13 01:04:31.383: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:04:31.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-103" for this suite. • [SLOW TEST:20.462 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":43,"skipped":776,"failed":0} Nov 13 01:04:31.394: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:04:28.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Nov 13 01:04:28.309: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fdaccc79-a725-44bb-8470-b5fe37923e75" in namespace "downward-api-900" to be "Succeeded or Failed" Nov 13 01:04:28.311: INFO: Pod "downwardapi-volume-fdaccc79-a725-44bb-8470-b5fe37923e75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128842ms Nov 13 01:04:30.314: INFO: Pod "downwardapi-volume-fdaccc79-a725-44bb-8470-b5fe37923e75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004868253s Nov 13 01:04:32.318: INFO: Pod "downwardapi-volume-fdaccc79-a725-44bb-8470-b5fe37923e75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008205653s STEP: Saw pod success Nov 13 01:04:32.318: INFO: Pod "downwardapi-volume-fdaccc79-a725-44bb-8470-b5fe37923e75" satisfied condition "Succeeded or Failed" Nov 13 01:04:32.320: INFO: Trying to get logs from node node2 pod downwardapi-volume-fdaccc79-a725-44bb-8470-b5fe37923e75 container client-container: STEP: delete the pod Nov 13 01:04:32.333: INFO: Waiting for pod downwardapi-volume-fdaccc79-a725-44bb-8470-b5fe37923e75 to disappear Nov 13 01:04:32.334: INFO: Pod downwardapi-volume-fdaccc79-a725-44bb-8470-b5fe37923e75 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:04:32.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-900" for this suite. 
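A downward API volume can expose a container's resource requests as files via `resourceFieldRef`; the test above reads the rendered value back out of the container log. A minimal sketch — names are illustrative, and the divisor is chosen so a 250m request renders as "250":

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-cpu-request
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["mounttest", "--file_content=/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container   # required for volume-level refs
          resource: requests.cpu
          divisor: 1m                       # 250m / 1m -> "250"
```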
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":566,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} Nov 13 01:04:32.346: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:03:43.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W1113 01:03:49.315413 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 01:04:51.335: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:04:51.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-447" for this suite. • [SLOW TEST:68.089 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":44,"skipped":631,"failed":0} Nov 13 01:04:51.347: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:59.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod test-webserver-367df840-b443-4abc-8bae-b80ad991e9d2 in namespace container-probe-4482 Nov 13 01:02:03.764: INFO: Started pod test-webserver-367df840-b443-4abc-8bae-b80ad991e9d2 in namespace container-probe-4482 STEP: checking the pod's current state and verifying that restartCount is present Nov 13 01:02:03.766: INFO: Initial restart count of pod test-webserver-367df840-b443-4abc-8bae-b80ad991e9d2 is 0 STEP: deleting the pod 
[AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:06:04.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4482" for this suite. • [SLOW TEST:244.569 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":671,"failed":0} Nov 13 01:06:04.295: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:02:26.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-f19e009b-8aca-4945-b402-fd8fdcd41bf6 in namespace container-probe-4857 Nov 13 01:02:32.344: INFO: Started pod busybox-f19e009b-8aca-4945-b402-fd8fdcd41bf6 in namespace container-probe-4857 STEP: checking the pod's current state and verifying that restartCount is present Nov 13 01:02:32.347: INFO: Initial restart count of pod busybox-f19e009b-8aca-4945-b402-fd8fdcd41bf6 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:06:32.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4857" for this suite. 
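This is the mirror image of the failing-readiness case: an exec liveness probe that keeps succeeding, observed for roughly four minutes to prove restartCount stays at 0. A sketch of such a pod; the shell command is illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-exec
spec:
  containers:
  - name: busybox
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    args: ["/bin/sh", "-c", "echo ok > /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # succeeds as long as the file exists
      initialDelaySeconds: 5
      periodSeconds: 5
```

Because /tmp/health is never removed, every probe passes and the kubelet never restarts the container; the companion "should be restarted" test deletes the file mid-run instead.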
• [SLOW TEST:246.605 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":285,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} Nov 13 01:06:32.899: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:01:19.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-4573 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4573 STEP: Creating statefulset with conflicting port in namespace statefulset-4573 STEP: Waiting until pod test-pod will start running in namespace statefulset-4573 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4573 Nov 13 01:06:23.288: FAIL: Pod ss-0 expected to be re-created at least once Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001681980) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc001681980) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc001681980, 0x70e7b58) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Nov 13 01:06:23.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4573 describe po test-pod' Nov 13 01:06:23.487: INFO: stderr: "" Nov 13 01:06:23.487: INFO: stdout: "Name: test-pod\nNamespace: statefulset-4573\nPriority: 0\nNode: node1/10.10.190.207\nStart Time: Sat, 13 Nov 2021 01:01:19 +0000\nLabels: \nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.136\"\n ],\n \"mac\": \"56:40:78:a2:35:23\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": 
\"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.136\"\n ],\n \"mac\": \"56:40:78:a2:35:23\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: privileged\nStatus: Running\nIP: 10.244.3.136\nIPs:\n IP: 10.244.3.136\nContainers:\n webserver:\n Container ID: docker://dfc4fccb2b12a6b7b861cca161cb6db82353f97f08d010b012025c9e569b3021\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Sat, 13 Nov 2021 01:01:21 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zt56n (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-zt56n:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulling 5m3s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n Normal Pulled 5m2s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\" in 288.353293ms\n Normal Created 5m2s kubelet Created container webserver\n Normal Started 5m2s kubelet Started container webserver\n" Nov 13 01:06:23.487: INFO: Output of kubectl describe test-pod: Name: test-pod Namespace: statefulset-4573 Priority: 0 Node: node1/10.10.190.207 Start Time: Sat, 13 Nov 2021 01:01:19 +0000 Labels: Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.136" ], "mac": "56:40:78:a2:35:23", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.136" ], "mac": "56:40:78:a2:35:23", "default": true, "dns": {} }] kubernetes.io/psp: privileged Status: Running IP: 10.244.3.136 IPs: IP: 10.244.3.136 Containers: webserver: Container ID: docker://dfc4fccb2b12a6b7b861cca161cb6db82353f97f08d010b012025c9e569b3021 Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 Port: 21017/TCP Host Port: 21017/TCP State: Running Started: Sat, 13 Nov 2021 01:01:21 +0000 Ready: True Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zt56n (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-zt56n: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulling 5m3s kubelet Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" Normal Pulled 5m2s kubelet Successfully pulled image 
"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 288.353293ms Normal Created 5m2s kubelet Created container webserver Normal Started 5m2s kubelet Started container webserver Nov 13 01:06:23.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4573 logs test-pod --tail=100' Nov 13 01:06:23.649: INFO: stderr: "" Nov 13 01:06:23.649: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.136. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.136. Set the 'ServerName' directive globally to suppress this message\n[Sat Nov 13 01:01:21.591438 2021] [mpm_event:notice] [pid 1:tid 139804909554536] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sat Nov 13 01:01:21.591472 2021] [core:notice] [pid 1:tid 139804909554536] AH00094: Command line: 'httpd -D FOREGROUND'\n" Nov 13 01:06:23.649: INFO: Last 100 log lines of test-pod: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.136. Set the 'ServerName' directive globally to suppress this message AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.136. Set the 'ServerName' directive globally to suppress this message [Sat Nov 13 01:01:21.591438 2021] [mpm_event:notice] [pid 1:tid 139804909554536] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations [Sat Nov 13 01:01:21.591472 2021] [core:notice] [pid 1:tid 139804909554536] AH00094: Command line: 'httpd -D FOREGROUND' Nov 13 01:06:23.649: INFO: Deleting all statefulset in ns statefulset-4573 Nov 13 01:06:23.651: INFO: Scaling statefulset ss to 0 Nov 13 01:06:23.662: INFO: Waiting for statefulset status.replicas updated to 0 Nov 13 01:06:33.669: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "statefulset-4573". STEP: Found 7 events. Nov 13 01:06:33.681: INFO: At 2021-11-13 01:01:19 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: []] Nov 13 01:06:33.681: INFO: At 2021-11-13 01:01:19 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. 
Allowed ports: [9100]] Nov 13 01:06:33.681: INFO: At 2021-11-13 01:01:19 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104]] Nov 13 01:06:33.681: INFO: At 2021-11-13 01:01:20 +0000 UTC - event for test-pod: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" Nov 13 01:06:33.681: INFO: At 2021-11-13 01:01:21 +0000 UTC - event for test-pod: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 288.353293ms Nov 13 01:06:33.681: INFO: At 2021-11-13 01:01:21 +0000 UTC - event for test-pod: {kubelet node1} Created: Created container webserver Nov 13 01:06:33.681: INFO: At 2021-11-13 01:01:21 +0000 UTC - event for test-pod: {kubelet node1} Started: Started container webserver Nov 13 01:06:33.685: INFO: POD NODE PHASE GRACE CONDITIONS Nov 13 01:06:33.685: INFO: test-pod node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 01:01:19 +0000 UTC }] Nov 13 01:06:33.685: INFO: Nov 13 01:06:33.694: INFO: Logging node info for node master1 Nov 13 01:06:33.697: INFO: Node Info: &Node{ObjectMeta:{master1 56d66c54-e52b-494a-a758-e4b658c4b245 78298 0 2021-11-12 21:05:50 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:05:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:13:20 +0000 
UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:25 +0000 UTC,LastTransitionTime:2021-11-12 21:11:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:30 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:30 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:30 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 01:06:30 +0000 UTC,LastTransitionTime:2021-11-12 21:11:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94e600d00e79450a9fb632d8473a11eb,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:6e4bb815-8b93-47c2-9321-93e7ada261f6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:57d1a39684ee5a5b5d34638cff843561d440d0f996303b2e841cabf228a4c2af nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 01:06:33.698: INFO: Logging kubelet events for node master1 Nov 13 01:06:33.700: INFO: Logging pods the kubelet 
thinks is on node master1
Nov 13 01:06:33.729: INFO: kube-apiserver-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:06:33.729: INFO: Container kube-apiserver ready: true, restart count 0
Nov 13 01:06:33.729: INFO: kube-proxy-6m7qt started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:06:33.729: INFO: Container kube-proxy ready: true, restart count 1
Nov 13 01:06:33.729: INFO: container-registry-65d7c44b96-qwqcz started at 2021-11-12 21:12:56 +0000 UTC (0+2 container statuses recorded)
Nov 13 01:06:33.729: INFO: Container docker-registry ready: true, restart count 0
Nov 13 01:06:33.729: INFO: Container nginx ready: true, restart count 0
Nov 13 01:06:33.729: INFO: node-exporter-zm5hq started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 01:06:33.729: INFO: Container kube-rbac-proxy ready: true, restart count 0
Nov 13 01:06:33.729: INFO: Container node-exporter ready: true, restart count 0
Nov 13 01:06:33.729: INFO: kube-scheduler-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:06:33.729: INFO: Container kube-scheduler ready: true, restart count 0
Nov 13 01:06:33.729: INFO: kube-controller-manager-master1 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:06:33.729: INFO: Container kube-controller-manager ready: true, restart count 2
Nov 13 01:06:33.729: INFO: kube-flannel-79bvx started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 01:06:33.729: INFO: Init container install-cni ready: true, restart count 0
Nov 13 01:06:33.729: INFO: Container kube-flannel ready: true, restart count 2
Nov 13 01:06:33.729: INFO: kube-multus-ds-amd64-qtmwl started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:06:33.729: INFO: Container kube-multus ready: true, restart count 1
Nov 13 01:06:33.729: INFO: coredns-8474476ff8-9vc8b started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:06:33.729: INFO: Container coredns ready: true, restart count 2
W1113 01:06:33.744406      26 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
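The per-node dumps in this section (node info, kubelet events, then the pod listing) are the framework's standard post-failure debug output; the phrase "pods the kubelet thinks is on node X" reflects that the framework queries the kubelet itself rather than the apiserver. An apiserver-side approximation of the same listing via a field selector, sketched with client-go (node name taken from the dump above; error handling kept minimal):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// All pods scheduled to master1, across every namespace; roughly the
	// listing printed above, minus the per-container ready/restart detail.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=master1"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s (%s)\n", p.Namespace, p.Name, p.Status.Phase)
	}
}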
Nov 13 01:06:33.813: INFO: Latency metrics for node master1 Nov 13 01:06:33.813: INFO: Logging node info for node master2 Nov 13 01:06:33.815: INFO: Node Info: &Node{ObjectMeta:{master2 9cc6c106-2749-4b3a-bbe2-d8a672ab49e0 78285 0 2021-11-12 21:06:20 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:06:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-11-12 21:16:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-12 21:16:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:30 +0000 UTC,LastTransitionTime:2021-11-12 21:11:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:28 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:28 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:28 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 01:06:28 +0000 UTC,LastTransitionTime:2021-11-12 21:08:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:65d51a0e6dc44ad1ac5d3b5cd37365f1,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:728abaee-0c5e-4ddb-a22e-72a1345c5ab6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 01:06:33.815: INFO: Logging kubelet events for node master2 Nov 13 01:06:33.818: INFO: Logging pods the kubelet thinks is on node master2 Nov 13 01:06:33.843: INFO: kube-controller-manager-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:33.843: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 13 01:06:33.843: INFO: kube-scheduler-master2 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:33.843: INFO: Container kube-scheduler ready: true, restart count 2 Nov 13 01:06:33.843: INFO: kube-proxy-5xbt9 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:33.843: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 01:06:33.843: INFO: kube-flannel-x76f4 started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 01:06:33.843: INFO: Init container install-cni ready: true, restart count 0 Nov 13 01:06:33.843: INFO: Container kube-flannel ready: true, restart count 1 Nov 13 01:06:33.843: INFO: kube-multus-ds-amd64-8zzgk started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:33.843: INFO: Container kube-multus ready: true, restart count 1 Nov 13 01:06:33.843: INFO: coredns-8474476ff8-s7twh started at 2021-11-12 21:09:11 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:33.843: INFO: Container coredns ready: true, restart count 1 Nov 13 01:06:33.843: INFO: node-exporter-clpwc started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 01:06:33.843: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:06:33.843: INFO: Container node-exporter ready: true, restart count 0 Nov 13 01:06:33.843: INFO: kube-apiserver-master2 started at 2021-11-12 
21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:33.843: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 01:06:33.843: INFO: node-feature-discovery-controller-cff799f9f-c54h8 started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:33.843: INFO: Container nfd-controller ready: true, restart count 0 W1113 01:06:33.858336 26 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 01:06:33.935: INFO: Latency metrics for node master2 Nov 13 01:06:33.935: INFO: Logging node info for node master3 Nov 13 01:06:33.939: INFO: Node Info: &Node{ObjectMeta:{master3 fce0cd54-e4d8-4ce1-b720-522aad2d7989 78301 0 2021-11-12 21:06:31 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:32 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:32 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:32 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 01:06:32 +0000 UTC,LastTransitionTime:2021-11-12 21:11:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:592c271b4697499588d9c2b3767b866a,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a95de4ca-c566-4b34-8463-623af932d416,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 01:06:33.939: INFO: Logging kubelet events for node master3 Nov 13 01:06:33.941: INFO: Logging pods the kubelet thinks is on node master3 Nov 13 01:06:33.957: INFO: dns-autoscaler-7df78bfcfb-d88qs started at 2021-11-12 21:09:13 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:33.957: INFO: Container autoscaler ready: true, restart count 1 Nov 13 01:06:33.957: INFO: kube-proxy-tssd5 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:33.957: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 01:06:33.957: INFO: kube-flannel-vxlrs started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 01:06:33.957: INFO: Init container install-cni ready: true, restart count 0 Nov 13 01:06:33.957: INFO: Container kube-flannel ready: true, restart count 1 Nov 13 01:06:33.957: INFO: kube-multus-ds-amd64-vp8p7 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:33.957: INFO: Container kube-multus ready: true, restart count 1 Nov 13 01:06:33.957: INFO: node-exporter-l4x25 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 01:06:33.957: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:06:33.957: INFO: Container node-exporter ready: true, restart count 0 Nov 13 01:06:33.957: INFO: kube-apiserver-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:33.958: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 01:06:33.958: INFO: kube-controller-manager-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:33.958: INFO: Container kube-controller-manager ready: true, restart count 3 Nov 13 01:06:33.958: INFO: kube-scheduler-master3 started at 
2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:33.958: INFO: Container kube-scheduler ready: true, restart count 2 W1113 01:06:33.973937 26 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 01:06:34.044: INFO: Latency metrics for node master3 Nov 13 01:06:34.044: INFO: Logging node info for node node1 Nov 13 01:06:34.047: INFO: Node Info: &Node{ObjectMeta:{node1 6ceb907c-9809-4d18-88c6-b1e10ba80f97 78286 0 2021-11-12 21:07:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-12 21:20:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:27 +0000 UTC,LastTransitionTime:2021-11-12 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:28 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:28 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:28 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 01:06:28 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7cf6287777fe4e3b9a80df40dea25b6d,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:2125bc5f-9167-464a-b6d0-8e8a192327d3,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:1841df8d4cc71e4f69cc1603012b99570f40d18cd36ee1065933b46f984cf0cd alpine:3.12],SizeBytes:5592390,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 01:06:34.047: INFO: Logging kubelet events for node node1 Nov 13 01:06:34.050: INFO: Logging pods the kubelet thinks is on node node1 Nov 13 01:06:34.065: INFO: kube-proxy-p6kbl 
started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:34.065: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 01:06:34.065: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:34.065: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 01:06:34.065: INFO: cmk-init-discover-node1-vkj2s started at 2021-11-12 21:20:18 +0000 UTC (0+3 container statuses recorded) Nov 13 01:06:34.065: INFO: Container discover ready: false, restart count 0 Nov 13 01:06:34.065: INFO: Container init ready: false, restart count 0 Nov 13 01:06:34.065: INFO: Container install ready: false, restart count 0 Nov 13 01:06:34.065: INFO: test-pod started at 2021-11-13 01:01:19 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:34.065: INFO: Container webserver ready: true, restart count 0 Nov 13 01:06:34.065: INFO: node-feature-discovery-worker-zgr4c started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:34.065: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 01:06:34.065: INFO: cmk-webhook-6c9d5f8578-2gp25 started at 2021-11-12 21:21:01 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:34.065: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 01:06:34.065: INFO: prometheus-k8s-0 started at 2021-11-12 21:22:14 +0000 UTC (0+4 container statuses recorded) Nov 13 01:06:34.065: INFO: Container config-reloader ready: true, restart count 0 Nov 13 01:06:34.065: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 01:06:34.065: INFO: Container grafana ready: true, restart count 0 Nov 13 01:06:34.065: INFO: Container prometheus ready: true, restart count 1 Nov 13 01:06:34.065: INFO: kube-multus-ds-amd64-4wqsv started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:34.065: INFO: Container kube-multus ready: true, restart count 1 Nov 13 01:06:34.065: INFO: cmk-4tcdw started at 2021-11-12 21:21:00 +0000 UTC (0+2 container statuses recorded) Nov 13 01:06:34.065: INFO: Container nodereport ready: true, restart count 0 Nov 13 01:06:34.065: INFO: Container reconcile ready: true, restart count 0 Nov 13 01:06:34.065: INFO: node-exporter-hqkfs started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 01:06:34.065: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:06:34.065: INFO: Container node-exporter ready: true, restart count 0 Nov 13 01:06:34.065: INFO: nginx-proxy-node1 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:34.065: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 01:06:34.065: INFO: kube-flannel-r7bbp started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 01:06:34.065: INFO: Init container install-cni ready: true, restart count 2 Nov 13 01:06:34.065: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 01:06:34.065: INFO: prometheus-operator-585ccfb458-qcz7s started at 2021-11-12 21:21:55 +0000 UTC (0+2 container statuses recorded) Nov 13 01:06:34.065: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:06:34.065: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 01:06:34.065: INFO: collectd-74xkn started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded) Nov 13 01:06:34.065: INFO: Container collectd ready: true, restart count 0 Nov 13 01:06:34.065: INFO: 
Container collectd-exporter ready: true, restart count 0 Nov 13 01:06:34.065: INFO: Container rbac-proxy ready: true, restart count 0 W1113 01:06:34.078967 26 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 01:06:34.255: INFO: Latency metrics for node node1 Nov 13 01:06:34.255: INFO: Logging node info for node node2 Nov 13 01:06:34.258: INFO: Node Info: &Node{ObjectMeta:{node2 652722dd-12b1-4529-ba4d-a00c590e4a68 78279 0 2021-11-12 21:07:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-12 21:20:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:25 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:25 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:25 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 01:06:25 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fec67f7547064c508c27d44a9bf99ae7,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0a05ac00-ff21-4518-bf68-3564c7a8cf65,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 01:06:34.259: INFO: Logging kubelet events for node node2 Nov 13 01:06:34.261: INFO: Logging pods the kubelet thinks is on node node2 Nov 13 01:06:34.281: INFO: nginx-proxy-node2 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:34.281: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 01:06:34.281: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:34.281: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 01:06:34.281: INFO: execpod-affinity5mp9d started at 2021-11-13 01:04:30 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:34.281: INFO: Container agnhost-container ready: true, restart count 0 Nov 13 01:06:34.281: INFO: liveness-197a0853-0502-46dc-9e2f-1865252cbcd6 
started at 2021-11-13 01:03:40 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:34.281: INFO: Container agnhost-container ready: true, restart count 0 Nov 13 01:06:34.281: INFO: kube-proxy-pzhf2 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:34.281: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 01:06:34.281: INFO: kube-flannel-mg66r started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 01:06:34.281: INFO: Init container install-cni ready: true, restart count 2 Nov 13 01:06:34.281: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 01:06:34.281: INFO: kube-multus-ds-amd64-2wqj5 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:34.281: INFO: Container kube-multus ready: true, restart count 1 Nov 13 01:06:34.281: INFO: kubernetes-dashboard-785dcbb76d-w2mls started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:34.281: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 01:06:34.281: INFO: node-feature-discovery-worker-mm7xs started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:34.281: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 01:06:34.281: INFO: collectd-mp2z6 started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded) Nov 13 01:06:34.281: INFO: Container collectd ready: true, restart count 0 Nov 13 01:06:34.281: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 01:06:34.281: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 01:06:34.281: INFO: affinity-nodeport-timeout-s6p4f started at 2021-11-13 01:04:21 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:34.281: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Nov 13 01:06:34.281: INFO: cmk-qhvr7 started at 2021-11-12 21:21:01 +0000 UTC (0+2 container statuses recorded) Nov 13 01:06:34.281: INFO: Container nodereport ready: true, restart count 0 Nov 13 01:06:34.281: INFO: Container reconcile ready: true, restart count 0 Nov 13 01:06:34.281: INFO: cmk-init-discover-node2-5f4hp started at 2021-11-12 21:20:38 +0000 UTC (0+3 container statuses recorded) Nov 13 01:06:34.281: INFO: Container discover ready: false, restart count 0 Nov 13 01:06:34.281: INFO: Container init ready: false, restart count 0 Nov 13 01:06:34.281: INFO: Container install ready: false, restart count 0 Nov 13 01:06:34.281: INFO: affinity-nodeport-timeout-n98kj started at 2021-11-13 01:04:21 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:34.281: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Nov 13 01:06:34.281: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:34.281: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 01:06:34.281: INFO: node-exporter-hstd9 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 01:06:34.281: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:06:34.281: INFO: Container node-exporter ready: true, restart count 0 Nov 13 01:06:34.281: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 started at 2021-11-12 21:25:09 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:34.281: INFO: Container tas-extender ready: true, restart count 0 Nov 13 01:06:34.281: INFO: affinity-nodeport-timeout-wcb8f started at 2021-11-13 01:04:21 
+0000 UTC (0+1 container statuses recorded) Nov 13 01:06:34.281: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 W1113 01:06:34.303484 26 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 01:06:34.473: INFO: Latency metrics for node node2 Nov 13 01:06:34.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4573" for this suite. • Failure [315.250 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Should recreate evicted statefulset [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:06:23.289: Pod ss-0 expected to be re-created at least once /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 ------------------------------ {"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":13,"skipped":201,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} Nov 13 01:06:34.488: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:04:19.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-6887 Nov 13 01:04:19.498: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Nov 13 01:04:21.503: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Nov 13 01:04:21.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Nov 13 01:04:21.801: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Nov 13 01:04:21.801: INFO: stdout: "iptables" Nov 13 01:04:21.801: INFO: proxyMode: iptables Nov 13 01:04:21.809: INFO: Waiting for pod kube-proxy-mode-detector to disappear Nov 13 01:04:21.811: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-6887 STEP: creating replication controller affinity-nodeport-timeout in namespace services-6887 I1113 01:04:21.824903 29 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-6887, replica count: 3 I1113 01:04:24.876888 29 
runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 01:04:27.879796 29 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 01:04:30.882163 29 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 13 01:04:30.892: INFO: Creating new exec pod Nov 13 01:04:35.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Nov 13 01:04:36.161: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport-timeout 80\n+ echo hostName\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Nov 13 01:04:36.162: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Nov 13 01:04:36.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.48.160 80' Nov 13 01:04:36.432: INFO: stderr: "+ nc -v -t -w 2 10.233.48.160 80\nConnection to 10.233.48.160 80 port [tcp/http] succeeded!\n+ echo hostName\n" Nov 13 01:04:36.432: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Nov 13 01:04:36.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:04:36.691: INFO: rc: 1 Nov 13 01:04:36.691: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:04:37.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:04:37.941: INFO: rc: 1 Nov 13 01:04:37.941: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
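The run of "Connection refused" errors that follows is the e2e framework polling the freshly allocated NodePort (10.10.190.207:31726) until kube-proxy has programmed the iptables rules for the new service; each probe is the same kubectl exec + nc one-liner quoted above. A minimal sketch of running the equivalent probe by hand, assuming the exec pod execpod-affinity5mp9d from this log still exists in namespace services-6887 and reusing the node IP and NodePort logged above:

    # Minimal sketch of the reachability probe from this log.
    # Assumes: exec pod execpod-affinity5mp9d in services-6887,
    # node IP 10.10.190.207 and NodePort 31726 (all taken from the log).
    # nc exits 0 as soon as the TCP connect succeeds, so the loop ends
    # once kube-proxy has synced the service rules on the node.
    kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 \
      exec execpod-affinity5mp9d -- /bin/sh -c \
      'for i in $(seq 1 60); do
         echo hostName | nc -v -t -w 2 10.10.190.207 31726 && exit 0
         sleep 1
       done
       echo "NodePort still unreachable after 60 attempts" >&2
       exit 1'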
Nov 13 01:04:38.692 – 01:05:25.943: [48 further identical probe attempts omitted] Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' roughly once per second; every attempt returned rc: 1 with 'nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused' and logged Retrying...
Nov 13 01:05:26.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:26.966: INFO: rc: 1 Nov 13 01:05:26.966: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:27.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:28.004: INFO: rc: 1 Nov 13 01:05:28.004: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:28.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:28.945: INFO: rc: 1 Nov 13 01:05:28.945: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:29.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:29.963: INFO: rc: 1 Nov 13 01:05:29.963: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 01:05:30.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:30.955: INFO: rc: 1 Nov 13 01:05:30.955: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:31.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:31.933: INFO: rc: 1 Nov 13 01:05:31.934: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:32.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:32.942: INFO: rc: 1 Nov 13 01:05:32.942: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:33.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:33.934: INFO: rc: 1 Nov 13 01:05:33.934: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 01:05:34.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:34.947: INFO: rc: 1 Nov 13 01:05:34.947: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:35.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:35.980: INFO: rc: 1 Nov 13 01:05:35.980: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:36.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:36.947: INFO: rc: 1 Nov 13 01:05:36.948: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:37.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:37.940: INFO: rc: 1 Nov 13 01:05:37.940: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 01:05:38.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:38.972: INFO: rc: 1 Nov 13 01:05:38.972: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:39.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:39.958: INFO: rc: 1 Nov 13 01:05:39.958: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31726 nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:40.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:40.931: INFO: rc: 1 Nov 13 01:05:40.931: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:41.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:41.972: INFO: rc: 1 Nov 13 01:05:41.972: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 01:05:42.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:42.947: INFO: rc: 1 Nov 13 01:05:42.947: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:43.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:43.949: INFO: rc: 1 Nov 13 01:05:43.949: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:44.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:44.937: INFO: rc: 1 Nov 13 01:05:44.937: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:45.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:45.965: INFO: rc: 1 Nov 13 01:05:45.965: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 01:05:46.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:46.947: INFO: rc: 1 Nov 13 01:05:46.947: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:47.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:47.938: INFO: rc: 1 Nov 13 01:05:47.938: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:48.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:48.954: INFO: rc: 1 Nov 13 01:05:48.954: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:49.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:49.941: INFO: rc: 1 Nov 13 01:05:49.941: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 01:05:50.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:50.969: INFO: rc: 1 Nov 13 01:05:50.969: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:51.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:51.944: INFO: rc: 1 Nov 13 01:05:51.944: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:52.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:52.935: INFO: rc: 1 Nov 13 01:05:52.935: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:53.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:53.929: INFO: rc: 1 Nov 13 01:05:53.929: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 01:05:54.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:54.945: INFO: rc: 1 Nov 13 01:05:54.945: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:55.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:55.923: INFO: rc: 1 Nov 13 01:05:55.923: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:56.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:56.942: INFO: rc: 1 Nov 13 01:05:56.942: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:57.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:58.288: INFO: rc: 1 Nov 13 01:05:58.288: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 01:05:58.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:58.954: INFO: rc: 1 Nov 13 01:05:58.954: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:05:59.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:05:59.935: INFO: rc: 1 Nov 13 01:05:59.935: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:00.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:00.959: INFO: rc: 1 Nov 13 01:06:00.959: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:01.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:01.991: INFO: rc: 1 Nov 13 01:06:01.991: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 01:06:02.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:02.933: INFO: rc: 1 Nov 13 01:06:02.933: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:03.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:03.955: INFO: rc: 1 Nov 13 01:06:03.955: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:04.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:04.967: INFO: rc: 1 Nov 13 01:06:04.967: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:05.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:05.955: INFO: rc: 1 Nov 13 01:06:05.955: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 01:06:06.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:06.967: INFO: rc: 1 Nov 13 01:06:06.967: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:07.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:07.947: INFO: rc: 1 Nov 13 01:06:07.947: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:08.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:09.191: INFO: rc: 1 Nov 13 01:06:09.191: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:09.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:09.938: INFO: rc: 1 Nov 13 01:06:09.939: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 01:06:10.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:10.920: INFO: rc: 1 Nov 13 01:06:10.920: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:11.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:11.945: INFO: rc: 1 Nov 13 01:06:11.945: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:12.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:12.953: INFO: rc: 1 Nov 13 01:06:12.953: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:13.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:13.951: INFO: rc: 1 Nov 13 01:06:13.951: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 01:06:14.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:14.963: INFO: rc: 1 Nov 13 01:06:14.963: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:15.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:15.950: INFO: rc: 1 Nov 13 01:06:15.950: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:16.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:16.947: INFO: rc: 1 Nov 13 01:06:16.947: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:17.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:17.947: INFO: rc: 1 Nov 13 01:06:17.947: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 01:06:18.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:18.950: INFO: rc: 1 Nov 13 01:06:18.951: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:19.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:19.943: INFO: rc: 1 Nov 13 01:06:19.943: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:20.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:20.918: INFO: rc: 1 Nov 13 01:06:20.918: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:21.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:21.947: INFO: rc: 1 Nov 13 01:06:21.947: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 01:06:22.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:22.951: INFO: rc: 1 Nov 13 01:06:22.952: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:23.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:23.960: INFO: rc: 1 Nov 13 01:06:23.960: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:24.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:24.932: INFO: rc: 1 Nov 13 01:06:24.932: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:25.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:25.983: INFO: rc: 1 Nov 13 01:06:25.983: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 01:06:26.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:26.927: INFO: rc: 1 Nov 13 01:06:26.927: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:27.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:28.278: INFO: rc: 1 Nov 13 01:06:28.278: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:28.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:28.950: INFO: rc: 1 Nov 13 01:06:28.950: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:29.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:29.953: INFO: rc: 1 Nov 13 01:06:29.953: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 01:06:30.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:30.959: INFO: rc: 1 Nov 13 01:06:30.959: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:31.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:31.923: INFO: rc: 1 Nov 13 01:06:31.923: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:32.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:32.939: INFO: rc: 1 Nov 13 01:06:32.939: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:33.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:33.965: INFO: rc: 1 Nov 13 01:06:33.965: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Nov 13 01:06:34.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:34.974: INFO: rc: 1 Nov 13 01:06:34.974: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:35.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:35.931: INFO: rc: 1 Nov 13 01:06:35.931: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:36.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:36.923: INFO: rc: 1 Nov 13 01:06:36.923: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Nov 13 01:06:36.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726' Nov 13 01:06:37.172: INFO: rc: 1 Nov 13 01:06:37.172: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6887 exec execpod-affinity5mp9d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31726: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31726 + echo hostName nc: connect to 10.10.190.207 port 31726 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
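The loop above is the service-reachability probe the e2e suite drives through the exec pod: the same kubectl exec + nc command, retried about once per second until the 2m0s budget is exhausted. A minimal standalone sketch of that polling pattern in Go, assuming kubectl is on PATH and already pointed at the cluster; pollNodePort is an invented helper name (not the framework's actual function), and the namespace, pod, and endpoint values are copied from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pollNodePort runs `echo hostName | nc` inside an existing exec pod until the
// endpoint accepts a TCP connection or the timeout elapses. Invented helper
// mirroring the retry loop in the log above, not the e2e framework's code.
func pollNodePort(namespace, execPod, host string, port int, timeout time.Duration) error {
	shellCmd := fmt.Sprintf("echo hostName | nc -v -t -w 2 %s %d", host, port)
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--namespace="+namespace,
			"exec", execPod, "--", "/bin/sh", "-x", "-c", shellCmd).CombinedOutput()
		if err == nil {
			return nil // nc connected and the command exited 0
		}
		fmt.Printf("rc: 1, retrying...\n%s\n", out)
		time.Sleep(1 * time.Second) // the log shows roughly one attempt per second
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s:%d over TCP protocol",
		timeout, host, port)
}

func main() {
	err := pollNodePort("services-6887", "execpod-affinity5mp9d", "10.10.190.207", 31726, 2*time.Minute)
	if err != nil {
		fmt.Println("FAIL:", err)
	}
}

The -w 2 flag bounds each connect attempt at two seconds, so a dead endpoint costs at most one probe interval instead of hanging the loop; a refused connection, as here, fails immediately.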
Nov 13 01:06:37.173: FAIL: Unexpected error:
    <*errors.errorString | 0xc0056414c0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31726 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31726 over TCP protocol
occurred
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc0011eedc0, 0x779f8f8, 0xc004261ce0, 0xc00343af00)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493 +0x751
k8s.io/kubernetes/test/e2e/network.glob..func24.26()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1846 +0x9c
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0017e2f00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0017e2f00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0017e2f00, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Nov 13 01:06:37.174: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-6887, will wait for the garbage collector to delete the pods
Nov 13 01:06:37.250: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 4.58251ms
Nov 13 01:06:37.351: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.527059ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-6887".
STEP: Found 35 events.
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:19 +0000 UTC - event for kube-proxy-mode-detector: {default-scheduler } Scheduled: Successfully assigned services-6887/kube-proxy-mode-detector to node2
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:20 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:20 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Started: Started container agnhost-container
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:20 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Created: Created container agnhost-container
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:20 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 333.240989ms
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:21 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-wcb8f
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:21 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-s6p4f
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:21 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-n98kj
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:21 +0000 UTC - event for affinity-nodeport-timeout-n98kj: {default-scheduler } Scheduled: Successfully assigned services-6887/affinity-nodeport-timeout-n98kj to node2
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:21 +0000 UTC - event for affinity-nodeport-timeout-s6p4f: {default-scheduler } Scheduled: Successfully assigned services-6887/affinity-nodeport-timeout-s6p4f to node2
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:21 +0000 UTC - event for affinity-nodeport-timeout-wcb8f: {default-scheduler } Scheduled: Successfully assigned services-6887/affinity-nodeport-timeout-wcb8f to node2
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:21 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Killing: Stopping container agnhost-container
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:22 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:22 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 336.849113ms
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:24 +0000 UTC - event for affinity-nodeport-timeout-n98kj: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 321.784646ms
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:24 +0000 UTC - event for affinity-nodeport-timeout-n98kj: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:25 +0000 UTC - event for affinity-nodeport-timeout-n98kj: {kubelet node2} Created: Created container affinity-nodeport-timeout
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:25 +0000 UTC - event for affinity-nodeport-timeout-wcb8f: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 344.802598ms
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:25 +0000 UTC - event for affinity-nodeport-timeout-wcb8f: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:26 +0000 UTC - event for affinity-nodeport-timeout-n98kj: {kubelet node2} Started: Started container affinity-nodeport-timeout
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:26 +0000 UTC - event for affinity-nodeport-timeout-s6p4f: {kubelet node2} Started: Started container affinity-nodeport-timeout
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:26 +0000 UTC - event for affinity-nodeport-timeout-s6p4f: {kubelet node2} Created: Created container affinity-nodeport-timeout
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:26 +0000 UTC - event for affinity-nodeport-timeout-s6p4f: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 318.566255ms
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:26 +0000 UTC - event for affinity-nodeport-timeout-s6p4f: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:26 +0000 UTC - event for affinity-nodeport-timeout-wcb8f: {kubelet node2} Started: Started container affinity-nodeport-timeout
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:26 +0000 UTC - event for affinity-nodeport-timeout-wcb8f: {kubelet node2} Created: Created container affinity-nodeport-timeout
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:30 +0000 UTC - event for execpod-affinity5mp9d: {default-scheduler } Scheduled: Successfully assigned services-6887/execpod-affinity5mp9d to node2
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:32 +0000 UTC - event for execpod-affinity5mp9d: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 303.898229ms
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:32 +0000 UTC - event for execpod-affinity5mp9d: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:33 +0000 UTC - event for execpod-affinity5mp9d: {kubelet node2} Created: Created container agnhost-container
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:04:33 +0000 UTC - event for execpod-affinity5mp9d: {kubelet node2} Started: Started container agnhost-container
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:06:37 +0000 UTC - event for affinity-nodeport-timeout-n98kj: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:06:37 +0000 UTC - event for affinity-nodeport-timeout-s6p4f: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:06:37 +0000 UTC - event for affinity-nodeport-timeout-wcb8f: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout
Nov 13 01:06:51.569: INFO: At 2021-11-13 01:06:37 +0000 UTC - event for execpod-affinity5mp9d: {kubelet node2} Killing: Stopping container agnhost-container
Nov 13 01:06:51.571: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 13 01:06:51.571: INFO: 
Nov 13 01:06:51.575: INFO: Logging node info for node master1
Nov 13 01:06:51.577: INFO: Node Info: &Node{ObjectMeta:{master1 56d66c54-e52b-494a-a758-e4b658c4b245 78413 0 2021-11-12 21:05:50 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:05:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:13:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:25 +0000 UTC,LastTransitionTime:2021-11-12 21:11:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:51 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:51 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:51 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 01:06:51 +0000 UTC,LastTransitionTime:2021-11-12 21:11:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94e600d00e79450a9fb632d8473a11eb,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:6e4bb815-8b93-47c2-9321-93e7ada261f6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:57d1a39684ee5a5b5d34638cff843561d440d0f996303b2e841cabf228a4c2af nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 01:06:51.578: INFO: Logging kubelet events for node master1 Nov 13 01:06:51.580: INFO: Logging pods the kubelet 
thinks is on node master1
Nov 13 01:06:51.589: INFO: kube-apiserver-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:06:51.589: INFO: Container kube-apiserver ready: true, restart count 0
Nov 13 01:06:51.589: INFO: kube-proxy-6m7qt started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:06:51.589: INFO: Container kube-proxy ready: true, restart count 1
Nov 13 01:06:51.589: INFO: container-registry-65d7c44b96-qwqcz started at 2021-11-12 21:12:56 +0000 UTC (0+2 container statuses recorded)
Nov 13 01:06:51.589: INFO: Container docker-registry ready: true, restart count 0
Nov 13 01:06:51.589: INFO: Container nginx ready: true, restart count 0
Nov 13 01:06:51.589: INFO: kube-controller-manager-master1 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:06:51.589: INFO: Container kube-controller-manager ready: true, restart count 2
Nov 13 01:06:51.589: INFO: kube-flannel-79bvx started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 01:06:51.589: INFO: Init container install-cni ready: true, restart count 0
Nov 13 01:06:51.589: INFO: Container kube-flannel ready: true, restart count 2
Nov 13 01:06:51.589: INFO: kube-multus-ds-amd64-qtmwl started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:06:51.589: INFO: Container kube-multus ready: true, restart count 1
Nov 13 01:06:51.589: INFO: coredns-8474476ff8-9vc8b started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:06:51.589: INFO: Container coredns ready: true, restart count 2
Nov 13 01:06:51.589: INFO: node-exporter-zm5hq started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 01:06:51.589: INFO: Container kube-rbac-proxy ready: true, restart count 0
Nov 13 01:06:51.589: INFO: Container node-exporter ready: true, restart count 0
Nov 13 01:06:51.589: INFO: kube-scheduler-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 01:06:51.589: INFO: Container kube-scheduler ready: true, restart count 0
W1113 01:06:51.604254 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
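
Before the per-node dumps continue, it is worth noting what the failed spec was exercising: execAffinityTestForSessionAffinityTimeout creates a NodePort Service with ClientIP session affinity and a configurable affinity timeout, backed by the affinity-nodeport-timeout ReplicationController visible in the events above. The sketch below shows roughly what such a Service object looks like using the k8s.io/api types; the selector, the service port, and the timeout value are assumptions for illustration (the NodePort is normally allocated by the apiserver, shown here only because 31726 is the port the probes targeted), and building it requires k8s.io/api and k8s.io/apimachinery in go.mod.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	timeout := int32(10) // assumed short affinity timeout; the real test chooses its own value

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "affinity-nodeport-timeout", // name taken from the events above
			Namespace: "services-6887",
		},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"name": "affinity-nodeport-timeout"}, // assumed pod label
			Ports: []corev1.ServicePort{{
				Port:     80,    // assumed service port
				NodePort: 31726, // the port the reachability probes targeted
			}},
			// ClientIP affinity with a timeout: repeated requests from the same
			// client IP should hit the same backend until the timeout elapses.
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
		},
	}
	fmt.Printf("session affinity config: %+v\n", svc.Spec.SessionAffinityConfig.ClientIP)
}

The test never got far enough to check affinity here; it failed earlier, while waiting for the NodePort itself to become reachable.
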
Nov 13 01:06:51.770: INFO: Latency metrics for node master1 Nov 13 01:06:51.770: INFO: Logging node info for node master2 Nov 13 01:06:51.774: INFO: Node Info: &Node{ObjectMeta:{master2 9cc6c106-2749-4b3a-bbe2-d8a672ab49e0 78395 0 2021-11-12 21:06:20 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:06:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-11-12 21:16:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-12 21:16:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:30 +0000 UTC,LastTransitionTime:2021-11-12 21:11:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:48 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:48 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:48 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 01:06:48 +0000 UTC,LastTransitionTime:2021-11-12 21:08:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:65d51a0e6dc44ad1ac5d3b5cd37365f1,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:728abaee-0c5e-4ddb-a22e-72a1345c5ab6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 01:06:51.774: INFO: Logging kubelet events for node master2 Nov 13 01:06:51.776: INFO: Logging pods the kubelet thinks is on node master2 Nov 13 01:06:51.786: INFO: kube-apiserver-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:51.786: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 01:06:51.786: INFO: node-feature-discovery-controller-cff799f9f-c54h8 started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:51.786: INFO: Container nfd-controller ready: true, restart count 0 Nov 13 01:06:51.786: INFO: coredns-8474476ff8-s7twh started at 2021-11-12 21:09:11 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:51.786: INFO: Container coredns ready: true, restart count 1 Nov 13 01:06:51.786: INFO: node-exporter-clpwc started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 01:06:51.786: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:06:51.786: INFO: Container node-exporter ready: true, restart count 0 Nov 13 01:06:51.786: INFO: kube-controller-manager-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:51.786: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 13 01:06:51.786: INFO: kube-scheduler-master2 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:51.786: INFO: Container kube-scheduler ready: true, restart count 2 Nov 13 01:06:51.786: INFO: kube-proxy-5xbt9 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:51.786: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 01:06:51.786: INFO: kube-flannel-x76f4 started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 
01:06:51.786: INFO: Init container install-cni ready: true, restart count 0 Nov 13 01:06:51.786: INFO: Container kube-flannel ready: true, restart count 1 Nov 13 01:06:51.786: INFO: kube-multus-ds-amd64-8zzgk started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:51.786: INFO: Container kube-multus ready: true, restart count 1 W1113 01:06:51.800360 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 01:06:51.868: INFO: Latency metrics for node master2 Nov 13 01:06:51.868: INFO: Logging node info for node master3 Nov 13 01:06:51.872: INFO: Node Info: &Node{ObjectMeta:{master3 fce0cd54-e4d8-4ce1-b720-522aad2d7989 78382 0 2021-11-12 21:06:31 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: 
{{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:42 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:42 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:42 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 01:06:42 +0000 UTC,LastTransitionTime:2021-11-12 21:11:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:592c271b4697499588d9c2b3767b866a,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a95de4ca-c566-4b34-8463-623af932d416,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 01:06:51.872: INFO: Logging kubelet events for node master3 Nov 13 01:06:51.876: INFO: Logging pods the kubelet thinks is on node master3 Nov 13 01:06:51.885: INFO: kube-apiserver-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:51.885: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 01:06:51.885: INFO: kube-controller-manager-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:51.885: INFO: Container kube-controller-manager ready: true, restart count 3 Nov 13 01:06:51.885: INFO: kube-scheduler-master3 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:51.885: INFO: Container kube-scheduler ready: true, restart count 2 Nov 13 01:06:51.885: INFO: node-exporter-l4x25 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 01:06:51.885: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:06:51.885: INFO: Container node-exporter ready: true, restart count 0 Nov 13 01:06:51.885: INFO: kube-proxy-tssd5 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:51.885: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 01:06:51.885: INFO: kube-flannel-vxlrs started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 01:06:51.885: INFO: Init container install-cni ready: true, restart count 0 Nov 13 01:06:51.885: INFO: Container kube-flannel ready: true, restart count 1 Nov 13 01:06:51.885: INFO: kube-multus-ds-amd64-vp8p7 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:51.885: INFO: Container kube-multus ready: true, restart count 1 Nov 13 01:06:51.885: INFO: dns-autoscaler-7df78bfcfb-d88qs started at 
2021-11-12 21:09:13 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:51.885: INFO: Container autoscaler ready: true, restart count 1 W1113 01:06:51.901718 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 01:06:51.969: INFO: Latency metrics for node master3 Nov 13 01:06:51.969: INFO: Logging node info for node node1 Nov 13 01:06:51.971: INFO: Node Info: &Node{ObjectMeta:{node1 6ceb907c-9809-4d18-88c6-b1e10ba80f97 78396 0 2021-11-12 21:07:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-12 21:20:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:27 +0000 UTC,LastTransitionTime:2021-11-12 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:48 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:48 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:48 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 01:06:48 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7cf6287777fe4e3b9a80df40dea25b6d,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:2125bc5f-9167-464a-b6d0-8e8a192327d3,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:1841df8d4cc71e4f69cc1603012b99570f40d18cd36ee1065933b46f984cf0cd alpine:3.12],SizeBytes:5592390,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 01:06:51.972: INFO: Logging kubelet events for node node1 Nov 13 01:06:51.975: INFO: Logging pods the kubelet thinks is on node node1 Nov 13 01:06:51.989: INFO: 
node-feature-discovery-worker-zgr4c started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:51.989: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 01:06:51.989: INFO: cmk-webhook-6c9d5f8578-2gp25 started at 2021-11-12 21:21:01 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:51.989: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 01:06:51.989: INFO: prometheus-k8s-0 started at 2021-11-12 21:22:14 +0000 UTC (0+4 container statuses recorded) Nov 13 01:06:51.989: INFO: Container config-reloader ready: true, restart count 0 Nov 13 01:06:51.989: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 01:06:51.989: INFO: Container grafana ready: true, restart count 0 Nov 13 01:06:51.989: INFO: Container prometheus ready: true, restart count 1 Nov 13 01:06:51.989: INFO: nginx-proxy-node1 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:51.989: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 01:06:51.989: INFO: kube-flannel-r7bbp started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 01:06:51.989: INFO: Init container install-cni ready: true, restart count 2 Nov 13 01:06:51.989: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 01:06:51.989: INFO: kube-multus-ds-amd64-4wqsv started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:51.989: INFO: Container kube-multus ready: true, restart count 1 Nov 13 01:06:51.989: INFO: cmk-4tcdw started at 2021-11-12 21:21:00 +0000 UTC (0+2 container statuses recorded) Nov 13 01:06:51.989: INFO: Container nodereport ready: true, restart count 0 Nov 13 01:06:51.989: INFO: Container reconcile ready: true, restart count 0 Nov 13 01:06:51.989: INFO: node-exporter-hqkfs started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 01:06:51.989: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:06:51.989: INFO: Container node-exporter ready: true, restart count 0 Nov 13 01:06:51.989: INFO: prometheus-operator-585ccfb458-qcz7s started at 2021-11-12 21:21:55 +0000 UTC (0+2 container statuses recorded) Nov 13 01:06:51.989: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:06:51.989: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 01:06:51.989: INFO: collectd-74xkn started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded) Nov 13 01:06:51.989: INFO: Container collectd ready: true, restart count 0 Nov 13 01:06:51.989: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 01:06:51.989: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 01:06:51.989: INFO: kube-proxy-p6kbl started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:51.989: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 01:06:51.989: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:51.989: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 01:06:51.989: INFO: cmk-init-discover-node1-vkj2s started at 2021-11-12 21:20:18 +0000 UTC (0+3 container statuses recorded) Nov 13 01:06:51.989: INFO: Container discover ready: false, restart count 0 Nov 13 01:06:51.989: INFO: Container init ready: false, restart count 0 Nov 13 01:06:51.989: INFO: Container install ready: false, restart count 0 W1113 01:06:52.005615 29 
metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 01:06:52.143: INFO: Latency metrics for node node1 Nov 13 01:06:52.143: INFO: Logging node info for node node2 Nov 13 01:06:52.146: INFO: Node Info: &Node{ObjectMeta:{node2 652722dd-12b1-4529-ba4d-a00c590e4a68 78389 0 2021-11-12 21:07:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-12 21:20:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:45 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:45 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 01:06:45 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 01:06:45 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fec67f7547064c508c27d44a9bf99ae7,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0a05ac00-ff21-4518-bf68-3564c7a8cf65,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 01:06:52.147: INFO: Logging kubelet events for node node2 Nov 13 01:06:52.149: INFO: Logging pods the kubelet thinks is on node node2 Nov 13 01:06:52.161: INFO: collectd-mp2z6 started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded) Nov 13 01:06:52.161: INFO: Container collectd ready: true, restart count 0 Nov 13 01:06:52.161: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 01:06:52.161: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 01:06:52.161: INFO: node-feature-discovery-worker-mm7xs started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:52.161: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 01:06:52.161: INFO: cmk-qhvr7 started at 2021-11-12 21:21:01 +0000 UTC (0+2 container statuses recorded) Nov 13 01:06:52.161: INFO: 
Container nodereport ready: true, restart count 0 Nov 13 01:06:52.161: INFO: Container reconcile ready: true, restart count 0 Nov 13 01:06:52.161: INFO: cmk-init-discover-node2-5f4hp started at 2021-11-12 21:20:38 +0000 UTC (0+3 container statuses recorded) Nov 13 01:06:52.161: INFO: Container discover ready: false, restart count 0 Nov 13 01:06:52.161: INFO: Container init ready: false, restart count 0 Nov 13 01:06:52.161: INFO: Container install ready: false, restart count 0 Nov 13 01:06:52.161: INFO: node-exporter-hstd9 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 01:06:52.161: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:06:52.161: INFO: Container node-exporter ready: true, restart count 0 Nov 13 01:06:52.161: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 started at 2021-11-12 21:25:09 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:52.161: INFO: Container tas-extender ready: true, restart count 0 Nov 13 01:06:52.161: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:52.161: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 01:06:52.161: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:52.161: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 01:06:52.161: INFO: nginx-proxy-node2 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:52.161: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 01:06:52.161: INFO: kube-flannel-mg66r started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 01:06:52.161: INFO: Init container install-cni ready: true, restart count 2 Nov 13 01:06:52.161: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 01:06:52.161: INFO: kube-multus-ds-amd64-2wqj5 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:52.161: INFO: Container kube-multus ready: true, restart count 1 Nov 13 01:06:52.162: INFO: kubernetes-dashboard-785dcbb76d-w2mls started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:52.162: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 01:06:52.162: INFO: liveness-197a0853-0502-46dc-9e2f-1865252cbcd6 started at 2021-11-13 01:03:40 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:52.162: INFO: Container agnhost-container ready: true, restart count 0 Nov 13 01:06:52.162: INFO: kube-proxy-pzhf2 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 01:06:52.162: INFO: Container kube-proxy ready: true, restart count 1 W1113 01:06:52.175505 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 01:06:52.365: INFO: Latency metrics for node node2 Nov 13 01:06:52.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6887" for this suite. 
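------------------------------
The Node dumps above (node1, node2) are pretty-printed core/v1 Node objects, so everything in them, the NodeSystemInfo block, capacity/allocatable, and the kubelet's cached image list, can also be read programmatically. A minimal client-go sketch, assuming a client-go release matching the v1.21 cluster; the kubeconfig path is the one the suite itself logs:

// A hypothetical read-back of the node status dumped above, using client-go.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite logs at startup.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "node2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// NodeSystemInfo: kubelet version, container runtime, kernel, etc.
	fmt.Printf("kubelet %s, runtime %s\n",
		node.Status.NodeInfo.KubeletVersion,
		node.Status.NodeInfo.ContainerRuntimeVersion)

	// Allocatable is capacity minus system reservations; it is what the
	// scheduler can hand out (cpu 77 of 80, memory 174692020Ki of
	// 196552372Ki on node2 in the dump above).
	fmt.Printf("allocatable cpu=%s memory=%s\n",
		node.Status.Allocatable.Cpu().String(),
		node.Status.Allocatable.Memory().String())

	// The image list is the kubelet's local cache, largest first.
	for _, img := range node.Status.Images {
		fmt.Printf("%12d  %v\n", img.SizeBytes, img.Names)
	}
}
------------------------------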
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [152.917 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Nov 13 01:06:37.173: Unexpected error:
      <*errors.errorString | 0xc0056414c0>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31726 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31726 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":46,"skipped":616,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
Nov 13 01:06:52.384: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 01:01:56.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
W1113 01:01:56.605584 22 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
[It] should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a suspended cronjob
STEP: Ensuring no jobs are scheduled
STEP: Ensuring no job exists by listing jobs explicitly
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 01:06:56.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-8720" for this suite.
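------------------------------
The Services failure above bottoms out in a plain TCP reachability poll: the framework repeatedly dials the node's InternalIP on the service's NodePort until a 2m0s budget expires. A standalone sketch of that style of check; the endpoint and overall budget are taken from the failure message, while the 5s dial timeout and 2s poll interval are assumptions rather than the framework's actual values:

// A minimal reachability poll in the spirit of the failed check above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Node InternalIP + NodePort, copied from the failure message.
	endpoint := "10.10.190.207:31726"

	deadline := time.Now().Add(2 * time.Minute) // the 2m0s budget from the log
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", endpoint, 5*time.Second) // dial timeout is an assumption
		if err == nil {
			conn.Close()
			fmt.Println("endpoint reachable")
			return
		}
		time.Sleep(2 * time.Second) // poll interval is an assumption
	}
	fmt.Fprintf(os.Stderr, "service is not reachable within 2m0s timeout on endpoint %s over TCP protocol\n", endpoint)
	os.Exit(1)
}
------------------------------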
• [SLOW TEST:300.050 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 01:03:40.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod liveness-197a0853-0502-46dc-9e2f-1865252cbcd6 in namespace container-probe-2589
Nov 13 01:03:50.476: INFO: Started pod liveness-197a0853-0502-46dc-9e2f-1865252cbcd6 in namespace container-probe-2589
STEP: checking the pod's current state and verifying that restartCount is present
Nov 13 01:03:50.478: INFO: Initial restart count of pod liveness-197a0853-0502-46dc-9e2f-1865252cbcd6 is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 01:07:51.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2589" for this suite.
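------------------------------
The probing-container spec above passes when restartCount stays at 0, meaning the kubelet's tcp:8080 liveness probe kept succeeding for the pod's whole lifetime. A sketch of an equivalent pod definition against the v1.21-era API types (Probe still embeds Handler in that release; later releases rename it ProbeHandler); the agnhost arguments are an assumption, since any server listening on 8080 would satisfy the probe:

// An equivalent pod spec for the tcp:8080 liveness case, with v1.21-era types.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func livenessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-tcp8080"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // image present in the node cache above
				Args:  []string{"serve-hostname", "--port", "8080"}, // assumed args; any listener on 8080 works
				LivenessProbe: &corev1.Probe{
					// Handler is the v1.21 field name (ProbeHandler in later releases).
					Handler: corev1.Handler{
						TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
}

func main() {
	_ = livenessPod() // create via clientset.CoreV1().Pods(ns).Create(...) in a real run
}
------------------------------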
• [SLOW TEST:250.668 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":610,"failed":0}
Nov 13 01:07:51.108: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":23,"skipped":285,"failed":0}
Nov 13 01:06:56.636: INFO: Running AfterSuite actions on all nodes
Nov 13 01:07:51.142: INFO: Running AfterSuite actions on node 1
Nov 13 01:07:51.142: INFO: Skipping dumping logs from cluster

Summarizing 6 Failures:

[Fail] [sig-network] Services [It] should be able to create a functioning NodePort service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169

[Fail] [sig-network] Services [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572

[Fail] [sig-network] Services [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572

[Fail] [sig-network] Services [It] should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351

[Fail] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493

Ran 320 of 5770 Specs in 839.620 seconds
FAIL! -- 314 Passed | 6 Failed | 0 Pending | 5450 Skipped

Ginkgo ran 1 suite in 14m1.269128473s
Test Suite Failed
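------------------------------
Finally, the suspended-CronJob case (namespace "cronjob-8720" above) hinges on one field: while spec.suspend is true the controller tracks the schedule but never creates Jobs, which is exactly what the "Ensuring no jobs are scheduled" steps verify. A sketch of such an object against batch/v1, the API the deprecation warning in the log points to; the name, schedule, and container are illustrative:

// A suspended CronJob against batch/v1 (the replacement named in the warning).
package main

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func suspendedCronJob() *batchv1.CronJob {
	suspend := true
	return &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "suspended-cronjob"}, // illustrative name
		Spec: batchv1.CronJobSpec{
			Schedule: "*/1 * * * *", // illustrative schedule
			Suspend:  &suspend,      // while true, the controller creates no Jobs
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "c",
								Image:   "busybox:1.28", // image present in the node cache above
								Command: []string{"sleep", "10"},
							}},
						},
					},
				},
			},
		},
	}
}

func main() {
	_ = suspendedCronJob() // create via clientset.BatchV1().CronJobs(ns).Create(...) in a real run
}
------------------------------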